📄 Verified Ethereum Smart Contracts dataset
Verified Smart Contracts is a dataset of real Ethereum smart contracts, containing both Solidity and Vyper source code. It consists of every Ethereum smart contract deployed as of 🃏 1st of April 2022 that has been verified on Etherscan and has at least one transaction. The dataset is available at 🤗 Hugging Face.
| Component | Size | Num rows | LoC¹ |
|---|---|---|---|
| Raw | 8.80 GiB | 2,217,692 | 839,665,295 |
| Flattened | 1.16 GiB | 136,969 | 97,529,473 |
| Inflated | 0.76 GiB | 186,397 | 53,843,305 |
| Parsed | 4.44 GiB | 4,434,014 | 29,965,185 |
The raw dataset contains mostly the raw data from Etherscan, downloaded with the smart-contract-downloader tool. The conversion script below normalizes the various contract formats (JSON, multi-file, etc.) into a flattened source code structure.
```shell
python script/2parquet.py -s data -o parquet
```
The flattened dataset contains smart contracts where each contract embeds all the library code it requires. Each "file" is marked in the source code with a comment stating the original file path: `//File: path/to/file.sol`. The contracts are then filtered for uniqueness with a similarity threshold of 0.9. The low uniqueness requirement is due to the often large amount of embedded library code. If a more unique dataset is required, see the inflated dataset instead.
```shell
python script/filter_data.py -s parquet -o data/flattened --threshold 0.9
```
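The `//File:` markers make it possible to recover the original file layout from a flattened contract. A minimal sketch of such a splitter (the function name and marker regex are illustrative, not taken from the repository's scripts):

```python
import re

def split_flattened(source: str) -> dict[str, str]:
    """Split a flattened contract back into its original files,
    using the "//File: path" comment markers as boundaries."""
    files: dict[str, str] = {}
    name = None
    for line in source.splitlines(keepends=True):
        m = re.match(r"//\s*File:\s*(\S+)", line)
        if m:
            # Start collecting lines for a new file.
            name = m.group(1)
            files[name] = ""
        elif name is not None:
            files[name] += line
    return files

flat = "//File: a.sol\ncontract A {}\n//File: b.sol\ncontract B {}\n"
print(sorted(split_flattened(flat)))  # ['a.sol', 'b.sol']
```

Any source lines before the first marker are skipped here; a production splitter would need to decide how to handle preamble such as SPDX license headers.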
The inflated dataset splits every contract into its constituent files. These are then filtered for uniqueness with a similarity threshold of 0.9.
```shell
python script/filter_data.py -s parquet -o data/inflated --split-files --threshold 0.9
```
The parsed dataset contains a parsed extract of the Solidity code from the inflated dataset. It consists of contract classes (contract definitions) and functions (function definitions), as well as the accompanying documentation (code comments). The code is parsed with the solidity-universal-parser.
```shell
python script/parse_data.py -s data/inflated -o data/parsed
```
A plain text subset of the datasets above can be created with the `2plain_text.py` script. This produces a plain text dataset with the columns `text` (source code) and `language`.
```shell
python script/2plain_text.py -s data/inflated -o data/inflated_plain_text
```
This will produce a plain text version of the inflated dataset and save it to `data/inflated_plain_text`.
A large share of the smart contracts contain duplicated code, mostly due to the frequent use of library code: Etherscan embeds the library code used by a contract directly in its source code. To mitigate this, filtering is applied to produce datasets with mostly unique contract source code. The filtering computes the string distance between source codes. Because of the large number of contracts (~2 million), comparisons are only made within groups: by `contract_name` for the flattened dataset and by `file_name` for the inflated dataset.
The string comparison algorithm used is the Jaccard index.
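The grouped deduplication described above can be sketched as follows. This is an illustrative reimplementation, not the repository's `filter_data.py`; the choice of 5-character shingles for computing the Jaccard index is an assumption:

```python
def jaccard(a: str, b: str, k: int = 5) -> float:
    """Jaccard index between two strings over k-character shingles:
    |A ∩ B| / |A ∪ B| on the sets of substrings of length k."""
    sa = {a[i:i + k] for i in range(max(1, len(a) - k + 1))}
    sb = {b[i:i + k] for i in range(max(1, len(b) - k + 1))}
    return len(sa & sb) / len(sa | sb)

def dedupe(group: list[str], threshold: float = 0.9) -> list[str]:
    """Keep a contract only if it is below the similarity threshold
    against every contract already kept in its group."""
    kept: list[str] = []
    for code in group:
        if all(jaccard(code, other) < threshold for other in kept):
            kept.append(code)
    return kept

# Contracts sharing a contract_name are compared pairwise;
# exact and near duplicates are dropped.
group = ["contract Token { uint x; }", "contract Token { uint x; }"]
print(len(dedupe(group)))  # 1
```

Comparing only within `contract_name` (or `file_name`) groups reduces the pairwise comparisons from ~2 million² to far smaller per-group quadratics, at the cost of missing duplicates that were renamed.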
The data is stored as parquet files, most containing 30,000 records each.
Copyright © André Storhaug
This repository is licensed under the MIT License.
All contracts in the dataset are publicly available, obtained by using Etherscan APIs, and subject to their own original licenses.
Footnotes
1. LoC refers to the lines of `source_code`. The Parsed dataset counts lines of `func_code` + `func_documentation`. ↩