
Hello,
With the bull run coming back, I've been working on a small trading bot, and to do backtesting I needed historical data on a lot of coins. I couldn't find it anywhere in a form that was both easy to use and not a paid service.
So I dug into the Binance API and wrote my own script to fetch all this historical data.
Introducing binance-scribe
https://github.com/drov0/binance-scribe
It's a tool that grabs candle chart info and more. Here's an example of a single data point that it fetches :
[
    1499040000000,        // Open time
    "0.01634790",         // Open
    "0.80000000",         // High
    "0.01575800",         // Low
    "0.01577100",         // Close
    "148976.11427815",    // Volume
    1499644799999,        // Close time
    "2434.19055334",      // Quote asset volume
    308,                  // Number of trades
    "1756.87402397",      // Taker buy base asset volume
    "28.46694368",        // Taker buy quote asset volume
    "17928899.62484339"   // Ignore
]
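If you want to work with that in code, here is a minimal sketch of how one of those raw arrays could be mapped to a named object. The field names are my own, chosen to match the comments above; they are not part of the tool itself :

// Field names are my own, matching the comments above
function parseCandle(raw) {
    return {
        openTime  : raw[0],
        open      : parseFloat(raw[1]),
        high      : parseFloat(raw[2]),
        low       : parseFloat(raw[3]),
        close     : parseFloat(raw[4]),
        volume    : parseFloat(raw[5]),
        closeTime : raw[6],
        quoteAssetVolume         : parseFloat(raw[7]),
        numberOfTrades           : raw[8],
        takerBuyBaseAssetVolume  : parseFloat(raw[9]),
        takerBuyQuoteAssetVolume : parseFloat(raw[10])
        // raw[11] is the "Ignore" field, so it's left out
    };
}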
And it does that over multiple intervals : 1h, 30m, 15m, 5m and 1m.
All of this is configurable.
How to use
Run npm install to install the dependencies.
Then configure the scraper. At the top of "Scribe.js" you'll find a configuration object :
trade_against and coins are used to generate the pairs to scrape, eg : ENJBTC, ENJETH, IOTABTC, IOTAETH, etc... (see the sketch after the config example below).
intervals are the time ranges you want to save. Options are "1h", "30m", "15m", "5m" and "1m".
start_date is the date from which you want historical data, in millisecond unix time.
The earliest possible is 1483292280000, aka the 1st of January 2017. I picked the 1st of January 2019 because I don't need data that old.
const config = {
    trade_against : ['BTC', 'ETH'],
    coins : ['ENJ', 'IOTA', 'STEEM', 'CDT'],
    intervals : ["5m", "1m"],
    start_date : 1546315199000
};
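To picture what the first two options do, here is a small illustration of how pairs could be generated from such a config, and how you can compute a start_date from a readable date. This is my own sketch, not the actual code from Scribe.js :

// Illustration only, not taken from Scribe.js
const config = {
    trade_against : ['BTC', 'ETH'],
    coins : ['ENJ', 'IOTA', 'STEEM', 'CDT'],
    intervals : ["5m", "1m"],
    // millisecond unix time for 2019-01-01 00:00 UTC
    start_date : Date.parse('2019-01-01T00:00:00Z')
};

// The cross product of coins and trade_against gives the pair symbols
const pairs = [];
for (const coin of config.coins) {
    for (const base of config.trade_against) {
        pairs.push(coin + base);
    }
}

console.log(pairs); // [ 'ENJBTC', 'ENJETH', 'IOTABTC', 'IOTAETH', ... ]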
The results will be saved in JSON format in a folder called "data".
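Once a run is done, a backtest can simply read those files back. Here is a minimal sketch; the exact file names depend on the pair and interval you scraped, so this just loads everything it finds in the "data" folder :

const fs = require('fs');
const path = require('path');

// Load every JSON file from the "data" folder into one object keyed by file name
const dataDir = 'data';
const candles = {};
for (const file of fs.readdirSync(dataDir)) {
    if (file.endsWith('.json')) {
        candles[file] = JSON.parse(fs.readFileSync(path.join(dataDir, file), 'utf8'));
    }
}

console.log(Object.keys(candles));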
Over the course of development I realized that Binance has rate limits in place to prevent you from just spamming their API. So I included a feature that makes the script wait when it reaches that limit, and I grab the limit from the API itself, so if Binance changes the rate limits in a year or so this script will still be relevant :)
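The general idea looks something like the sketch below : read the request-weight limit from Binance's exchangeInfo endpoint and back off once you approach it. This is only an illustration of the concept (it uses axios and counts each request as weight 1, which is a simplification), not the actual implementation in Scribe.js :

// Illustration of the idea only, using axios for the example
const axios = require('axios');

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function getRequestWeightLimit() {
    // exchangeInfo lists the current rate limits, so nothing is hardcoded
    const res = await axios.get('https://api.binance.com/api/v3/exchangeInfo');
    const weightLimit = res.data.rateLimits.find(r => r.rateLimitType === 'REQUEST_WEIGHT');
    return weightLimit.limit; // allowed request weight per interval (e.g. per minute)
}

async function politeLoop(requests) {
    const limit = await getRequestWeightLimit();
    let used = 0;
    for (const doRequest of requests) {
        if (used >= limit) {
            // back off for a minute before continuing
            await sleep(60 * 1000);
            used = 0;
        }
        await doRequest();
        used++; // simplification: treat every request as weight 1
    }
}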
If anyone is interested in contributing, this is still lacking a feature to continue from previous scrapings. I might add it in a week or two if I get some free time. Depending on the time range and start date you pick, scraping can take quite a while.
Steem on !