What is NEAR?
NEAR is an L1 blockchain that aims to make it easy to develop, scale and deploy Web3 apps while maintaining the user-friendliness of those applications. Let me sprinkle in some facts about NEAR for proof:
NEAR is sharded: each shard is able to process transactions in parallel. Currently there are four shards, and every node validates all of them. However, the NEAR team has announced that tests for Phase 2 Sharding launched in January 2024. The goal was for each validator to work only on its own shard, which would facilitate truly parallel execution and provide a substantial boost to NEAR’s scaling. One might say… sharding is near!
NEAR is Proof of Stake, which probably doesn’t need any introduction in 2024. Essentially, instead of maintaining the chain using pure computational power, NEAR opts to leverage its economy. There is no race to find any sort of nonce and no time or electricity is sacrificed. Instead, validators put their tokens on the line. Well… to be 100% accurate, NEAR is actually TPoS (Thresholded Proof of Stake), which uses a mechanism similar to an auction. You can read more about it here.
NEAR’s tooling is actively developed and constantly improved upon. A prime example is the NEAR SDK, which makes it very easy to write smart contracts designed to be deployed on the NEAR blockchain.
NEAR’s features are great, but here at Resonance, we focus on security. NEAR does have a few caveats that developers need to be aware of during development. The most important ones are:
- Storage staking
- Asynchronous Cross-Contract Calls
- Potentially… problematic storage patterns under certain conditions
In this article, we shall tackle that last one. Spoiler alert: those “certain conditions” are rare. Nevertheless, it is an interesting behavior to observe, and it is worth being at least aware of.
A quick recap on how NEAR contracts’ storage works
We will be using the Rust version of NEAR SDK as a reference throughout this article. Imports (use statements) will be omitted for the sake of brevity.
In code, a contract’s storage (state) is represented as a struct. It might look like this:
#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize)]
pub struct Contract {
    some_value: u128,
    balances: LookupMap<AccountId, Balance>,
}
However, this representation is just an abstraction. Under the hood, the contract’s state is stored in key-value storage. You could initialize the contract’s storage in the following manner (using the example above):
#[near_bindgen]
impl Contract {
    #[init]
    pub fn new() -> Self {
        Self {
            some_value: 0,
            balances: LookupMap::new(b"b"),
        }
    }
}
The interesting part here is what happens during the initialization of more complex data structures (like a LookupMap, for instance). You might be wondering what that b"b" part is. Well, that’s something called a prefix. It does what it sounds like it does: it is the prefix used to identify the values belonging to the LookupMap referred to as balances in the code. We will see how this prefix is used in serialization later.
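To make the key-value picture concrete, here is a minimal sketch that pokes at the raw storage directly through near_sdk’s low-level env API. The raw_storage_demo method and the raw key it uses are purely illustrative (they are not part of the example contract), and exactly how LookupMap derives its raw keys from the prefix is an SDK implementation detail:

use near_sdk::{env, near_bindgen};

#[near_bindgen]
impl Contract {
    // Purely illustrative: at the lowest level, contract storage is just
    // bytes mapped to bytes. High-level collections (LookupMap, UnorderedSet, ...)
    // derive their raw keys from the prefix passed in at construction time.
    pub fn raw_storage_demo(&mut self) {
        env::storage_write(b"some-raw-key", b"some-raw-value");
        let value = env::storage_read(b"some-raw-key");
        assert_eq!(value, Some(b"some-raw-value".to_vec()));
    }
}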
Note that in the simple balances example above, using a plain b"b" as a prefix is fine. The prefix can be anything that can be serialized into bytes. It is good practice to create an enum that holds all of the storage prefixes (also often referred to as “Storage Keys”) and use its variants during initialization. That way, you can be sure there won’t be a prefix clash. So, rewriting our initialization using this best practice:
#[derive(BorshSerialize, BorshStorageKey)]
pub enum StorageKeys {
    BalancesMapKey,
    // however many you need here
}

#[near_bindgen]
impl Contract {
    #[init]
    pub fn new() -> Self {
        Self {
            some_value: 0,
            balances: LookupMap::new(StorageKeys::BalancesMapKey),
        }
    }
}
It does not matter what the actual value of each prefix is, as long as every prefix is different.
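To illustrate the kind of clash that distinct prefixes prevent, here is a hypothetical (and deliberately wrong) contract, not taken from the example above, in which two maps are constructed with the same prefix (imports omitted, as before):

#[near_bindgen]
#[derive(BorshSerialize, BorshDeserialize, PanicOnDefault)]
pub struct Clashing {
    map_a: LookupMap<u8, String>,
    map_b: LookupMap<u8, String>,
}

#[near_bindgen]
impl Clashing {
    #[init]
    pub fn new() -> Self {
        Self {
            // The mistake: both maps share the prefix b"a".
            map_a: LookupMap::new(b"a"),
            map_b: LookupMap::new(b"a"),
        }
    }

    pub fn write_a(&mut self, key: u8, value: String) {
        self.map_a.insert(key, value);
    }

    // After a call to write_a(1, "x"), this is expected to return Some("x")
    // even though nothing was ever inserted through map_b: with the same
    // prefix and the same key, both maps resolve to the same storage entry.
    pub fn read_b(&self, key: u8) -> Option<String> {
        self.map_b.get(&key).cloned()
    }
}

With unique prefixes (for example, two variants of the StorageKeys enum above), the two maps would live in disjoint parts of the key-value storage and read_b would simply return None.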
“Reappearing” storage
In order to set the scene and cut down on unnecessary intros, how about we start with an actual example? Consider this smart contract’s code:
use near_sdk::{
    borsh::{self, BorshDeserialize, BorshSerialize},
    env, near_bindgen,
    store::{LookupMap, UnorderedSet},
    AccountId, PanicOnDefault,
};

#[near_bindgen]
#[derive(BorshSerialize, BorshDeserialize, PanicOnDefault)]
pub struct Contract {
    variable: LookupMap<u8, UnorderedSet<AccountId>>,
}

#[near_bindgen]
impl Contract {
    #[init]
    #[payable]
    pub fn new() -> Self {
        Self {
            variable: LookupMap::new("map".as_bytes()),
        }
    }

    // Lists all accounts stored in the set under the given key.
    pub fn get_values_from(&self, key: u8) -> Vec<&AccountId> {
        let set = self
            .variable
            .get(&key)
            .unwrap_or_else(|| env::panic_str("Set with that key doesn't exist"));
        set.iter().collect()
    }

    // Creates a new (empty) set and stores it under the given key.
    pub fn create_new_set(&mut self, key: u8) {
        let set = UnorderedSet::new(key);
        self.variable.insert(key, set);
    }

    pub fn add_value_to_set(&mut self, key: u8, value: AccountId) {
        self.variable
            .get_mut(&key)
            .unwrap_or_else(|| env::panic_str("Set with that key doesn't exist"))
            .insert(value);
    }

    pub fn remove_set(&mut self, key: u8) {
        self.variable.remove(&key);
    }

    pub fn check_if_in_set(&self, key: u8, value: AccountId) -> bool {
        self.variable
            .get(&key)
            .unwrap_or_else(|| env::panic_str("Set doesn't exist"))
            .contains(&value)
    }
}
You should be able to copy, paste, and compile it without any issues (the test below expects a release build for the wasm32-unknown-unknown target, e.g. cargo build --target wasm32-unknown-unknown --release).
Looking at the code, you might already have an idea of what it’s all about. Let’s see the interesting behavior that we’ve been beating around the bush about. You can deploy it yourself, but to make things easier I prefer to use the near-workspaces crate. Consider this test case:
use std::str::FromStr;

use near_workspaces::AccountId;
use serde_json::json;

// or whatever the path to your compiled contract will be
const CONTRACT_WASM_PATH: &str = "./target/wasm32-unknown-unknown/release/storage_issues.wasm";

#[tokio::test]
async fn lets_see() -> anyhow::Result<()> {
    // 1. Setup
    let contract_wasm = std::fs::read(CONTRACT_WASM_PATH)?;
    let worker = near_workspaces::sandbox().await?;
    let contract = worker.dev_deploy(&contract_wasm).await?;
    contract
        .call("new")
        .args_json(json!({}))
        .transact()
        .await?
        .into_result()?;
    contract
        .call("create_new_set")
        .args_json(json!({"key": 1}))
        .transact()
        .await?
        .into_result()?;
    let stored_values: Vec<AccountId> = contract
        .call("get_values_from")
        .args_json(json!({"key": 1}))
        .view()
        .await?
        .json()?;
    println!("Stored values (should be empty): {stored_values:#?}");

    // 2. Adding some data
    let accounts = vec![
        AccountId::from_str("dev-20240609222219-61784250595198").unwrap(),
        AccountId::from_str("dev-20240609222221-19028740419876").unwrap(),
    ];
    for i in 0..2 {
        let account = accounts[i].clone();
        contract
            .call("add_value_to_set")
            .args_json(json!({"key": 1, "value": account}))
            .transact()
            .await?
            .into_result()?;
    }
    let stored_values: Vec<AccountId> = contract
        .call("get_values_from")
        .args_json(json!({"key": 1}))
        .view()
        .await?
        .json()?;
    println!("Stored values (shouldn't be empty): {stored_values:#?}");
    let does_contain: bool = contract
        .call("check_if_in_set")
        .args_json(json!({
            "key": 1,
            "value": accounts[0],
        }))
        .view()
        .await?
        .json()?;
    println!("Contains: {does_contain}");

    // 3. Removing the set, recreating it and checking the storage
    contract
        .call("remove_set")
        .args_json(json!({"key": 1}))
        .transact()
        .await?
        .into_result()?;
    contract
        .call("create_new_set")
        .args_json(json!({"key": 1}))
        .transact()
        .await?
        .into_result()?;
    let stored_values: Vec<AccountId> = contract
        .call("get_values_from")
        .args_json(json!({"key": 1}))
        .view()
        .await?
        .json()?;
    println!("{stored_values:#?}");
    let does_contain: bool = contract
        .call("check_if_in_set")
        .args_json(json!({
            "key": 1,
            "value": accounts[0],
        }))
        .view()
        .await?
        .json()?;
    println!("Contains: {does_contain}");
    let account = AccountId::from_str("dev-20240610161054-44872460494479").unwrap();
    contract
        .call("add_value_to_set")
        .args_json(json!({"key": 1, "value": account}))
        .transact()
        .await?
        .into_result()?;
    let stored_values: Vec<AccountId> = contract
        .call("get_values_from")
        .args_json(json!({"key": 1}))
        .view()
        .await?
        .json()?;
    println!("{stored_values:#?}");
    for acc in &accounts {
        let does_contain: bool = contract
            .call("check_if_in_set")
            .args_json(json!({
                "key": 1,
                "value": acc,
            }))
            .view()
            .await?
            .json()?;
        println!("Contains {acc}: {does_contain}");
    }
    Ok(())
}
Most of this is pretty straightforward, with no surprises. During the first step, we simply deploy the contract; then we add data and verify that it is stored in the contract’s storage. Then we remove the set and look at how the storage behaves (run the test with cargo test -- --nocapture if you want to see the println! output). To no one’s surprise, after the set is deleted we no longer have access to it.
But… what happens if we create a new set using the same storage key? We can, and at first glance it looks like the old data is gone: listing the set’s values returns nothing. But if we then query the contract to check whether the previously added data is present in the set, we actually get an output of true! It appears that our values were not, in fact, deleted from the storage. Or at least not completely?
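If you would rather have the test fail loudly than rely on the printed output, a small addition right after the membership check in step 3 of the test could capture the surprise (variable names as in the test above):

// The set was removed and recreated, yet the old account is still
// reported as a member; this is the "reappearing" storage in action.
assert!(does_contain, "expected the previously removed account to still be reported as present");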
Now that is an unexpected behavior, at the very least!
What a tease, huh?
So what, that’s the final paragraph with no explanation on why it happens?! Yes, it is. But don’t you worry — the explanation is ready and will be posted in Part 2 of this article! Stay tuned? Yeah, stay tuned!
About the Author:
Michal Bajor is a Senior Security Engineer at Resonance. He specializes in Web3 security and has exposure to numerous different protocols. He has experience in both the Solidity and Rust programming languages, with an interest in other, more exotic (at least in blockchain) technologies. His favorite protocol is, unsurprisingly, NEAR.
His role at Resonance is providing top-notch security services to clients. That most often means auditing clients’ code, looking for security flaws, possible optimizations, and potential design or architectural issues.