Today we officially tagged the latest version of the Hive API node stack (v1.27.11).
We’ve been running various beta versions of the new stack for quite a while on api.hive.blog, and it’s been easy to see that it performs much better than the old stack running on most of the other Hive API nodes. The release version of the API is now accessible at
https://api.syncad.com and we’ll switch api.hive.blog to the release version of the stack tomorrow.
With the official release of the new stack, I expect most of the other nodes will be updating to it within the next week or so, and we should see higher performance across the ecosystem.
Here’s a quick summary of some of what the BlockTrades team has been working on since my last report. As usual, it’s not a complete list. Red items are links to the actual work done in GitLab.
Upgrading everything to build/run on Ubuntu 24
One of the main changes we made across the entire set of apps was to update our build/deployment environment from Ubuntu 22 to Ubuntu 24. This required more work than expected, as it also involved an upgrade to a new version of Python, which is heavily used by our testing system and several of our development tools, requiring changes to that Python code.
We improved snapshot/replay processing so that you can first resume from a snapshot and then replay any additional blocks in your current block log, instead of requiring your node to re-sync those blocks.
Optimizations in progress
We’re currently finishing up a few long-planned performance improvements: 1) a fixed-size block memory allocator (
https://gitlab.syncad.com/hive/hive/-/merge_requests/1525) and 2) moving comment objects from memory into a RocksDB database.
While benchmarking the fixed-block allocator, we saw an 18% speedup in in-memory replay time and an even bigger speedup (23%) for disk-based replays. The new allocator also reduces memory usage by 1665 MB.
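The speedups above come from replacing general-purpose heap allocation with a pool of equal-size blocks. Below is a minimal conceptual sketch of that idea in Python (purely illustrative; it is not hived’s actual C++ implementation): allocation becomes a free-list pop and deallocation a free-list push, both O(1), with no per-object heap metadata and no fragmentation.

```python
# Conceptual sketch of a fixed-size block allocator (not the actual hived code).
class FixedBlockPool:
    def __init__(self, block_size: int, capacity: int):
        self.block_size = block_size
        # One contiguous arena instead of many small heap allocations.
        self.arena = bytearray(block_size * capacity)
        # Free list holds the byte offsets of unused blocks.
        self.free = list(range(0, block_size * capacity, block_size))

    def alloc(self) -> int:
        if not self.free:
            raise MemoryError("pool exhausted")
        return self.free.pop()  # O(1) allocation

    def dealloc(self, offset: int) -> None:
        self.free.append(offset)  # O(1) deallocation

pool = FixedBlockPool(block_size=64, capacity=1024)
a = pool.alloc()
b = pool.alloc()
pool.dealloc(a)
c = pool.alloc()  # reuses the freed slot immediately: no fragmentation
```

Because every block is the same size, a freed slot can always be handed straight to the next allocation, which is where both the speed and memory wins come from.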
Moving comment objects out of memory also drastically reduced the size of the statefile, which will make it easier for systems with relatively low amounts of memory to do “in-memory” replays. I’ll provide more details later, after I’ve personally benchmarked the new code, but everything I’ve heard so far sounds quite impressive.
Upcoming work: enhancing transaction signing
We still plan to overhaul the transaction signing system in hived. These changes will be included as part of the next hardfork, and they are also tightly related to the support for “Lite Accounts” using a HAF API (my plan is to offer a similar signing feature set across the two systems to keep things simple), so the Lite Account API will probably be rolled out on a similar time frame.
Upcoming work: lightweight HAF servers
Plans to support an alternate “lightweight” version of HAF with pruned block data:
https://gitlab.syncad.com/hive/haf/-/issues/277

We switched to using structured parameter/return value types for the new REST API. Only a few apps currently use the new API, and those apps (e.g. Denser and the HAF block explorer UI) have been upgraded to use the newer version of the API.
There was a fix to hivemind’s healthcheck due to an update to HAF’s schema:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/867

Optimized notification cache processing to reduce block processing time
We optimized notification cache processing. Previously this code had to process the last 90 days’ worth of blocks (around 3m blocks) to generate notifications for users, so it consumed the vast majority of the time hivemind needed to update on each block during live sync (around 500ms on a fast system). Now we incrementally update this cache on a block-by-block basis, and it is 100x faster (around 5ms). The total time for the hivemind indexer to process a block is now down to a comfortable 50ms. There are a few minor issues with the new code, which we’ll resolve in a later release.
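The incremental approach can be pictured with this simplified model (illustrative only; names and structures here are hypothetical, and hivemind’s real cache lives in Postgres): on each block, append only that block’s notifications and evict entries that have aged out of the 90-day window, instead of rescanning the whole window.

```python
# Simplified model of an incrementally maintained notification cache.
from collections import deque

WINDOW = 90 * 24 * 60 * 20  # ~90 days of blocks at one block per 3 seconds

class NotificationCache:
    def __init__(self):
        self.entries = deque()  # (block_num, notification), in block order

    def apply_block(self, block_num, new_notifications):
        # Add only this block's notifications...
        for n in new_notifications:
            self.entries.append((block_num, n))
        # ...and drop entries older than the 90-day window.
        cutoff = block_num - WINDOW
        while self.entries and self.entries[0][0] <= cutoff:
            self.entries.popleft()

cache = NotificationCache()
cache.apply_block(1, ["mention from alice"])
cache.apply_block(1 + WINDOW + 1, ["vote from bob"])  # first entry expires
```

Each block now costs work proportional to its own notifications rather than to 90 days of history, which is the source of the ~100x improvement described above.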
Redesigned follow-style tables to be more efficient
We also redesigned the tables and queries for managing follows, mutes, blacklists, etc. This not only reduced storage requirements but, more importantly, allows queries against these tables to scale well over time. In particular, functions that need to skip "muted" information should be much faster.
Configurable timeout for long API calls
Hivemind also now has a configurable “timeout” on API calls that API node operators can set to auto-kill pathological queries that might unnecessarily load their server. By default it is set to 5s, which I think should be appropriate for most current servers. Operators of very fast servers may consider lowering this value, and very slow servers may want to increase it.
- Similar to HAFAH, the REST API was modified to support structured parameter and return types.
- We added daily, monthly, and yearly aggregation data for coin balances.
- Further speedups to sync time.
- Added limits for APIs taking a page size
- Added support for delegation processing
- Track savings balance
- New API for recurrent transfers
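The new daily/monthly/yearly aggregation mentioned above can be pictured with a small sketch (field names and the function here are hypothetical illustrations, not the actual balance-tracker schema or API): group a chronological balance history into calendar buckets and keep each bucket’s closing balance.

```python
# Illustrative sketch of aggregating a balance history into daily buckets.
from collections import OrderedDict
from datetime import datetime

def daily_balances(history):
    """history: iterable of (iso_timestamp, balance) in chronological order.
    Returns the closing balance for each day."""
    days = OrderedDict()
    for ts, balance in history:
        day = datetime.fromisoformat(ts).date().isoformat()
        days[day] = balance  # later entries overwrite: keeps the day's close
    return days

history = [
    ("2025-01-01T10:00:00", 100),
    ("2025-01-01T18:00:00", 120),  # same day: closing balance wins
    ("2025-01-02T09:00:00", 90),
]
agg = daily_balances(history)
```

Monthly and yearly aggregation follow the same pattern with a coarser bucket key (e.g. truncating to `"2025-01"` or `"2025"`).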
For anyone who wants to run a Hive API node server, this is the place to start. This repo contains scripts for managing the required services using docker compose.
- Fixes to assisted startup script: https://gitlab.syncad.com/hive/haf_api_node/-/merge_requests/89
- Various fixes to healthchecks
- There is a separate “hivemind_user” role that is used to allow for timeout of API-based queries (as opposed to indexing queries). As mentioned in the hivemind section, this timeout defaults to 5s.
- We now use “haf” prefix by default instead of “haf-world” to shorten container names.
- Tempfiles under 200 bytes aren’t logged to reduce log spam
- More database tuning settings were made based on analysis using pg_gather. In particular, work_mem was reduced from 1024MB to 64MB, which should reduce the chance for an OOM condition on a heavily loaded server.
- Support for external signature providers (like Keychain) in transaction creation process supported by Wax
- Eliminated dependency and code vulnerabilities reported by the Dependabot service
- Implemented support for MetaMask as a signature provider extension. We are waiting for security audit verification (of the dedicated MetaMask snap implementation supporting Hive integration) before Hive can be officially supported by MetaMask. Hive has also been included in https://github.com/satoshilabs/slips/blob/master/slip-0044.md
- Improving the error information available to applications when API calls fail. The first step is mostly done in the hived repo: generation of constants representing specific FC_ASSERT instances. Next, exception classes in Wax will wrap the most common error cases and expose them to Python/TypeScript to simplify error processing (currently, complex regexp parsing is required on the client side to detect some types of errors).
- Improvements to better support workerbee (our bot library).
- First working version of Python implementation. API support is still in progress, but we expect to have our first prototype for automating generation of the API call definitions for Python from the swagger.json file (same as it currently works for TypeScript) by next week.
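The error-handling direction described above amounts to replacing message-regexp parsing with a mapping from generated assertion constants to typed exceptions. A sketch of the shape this could take (all class and constant names here are hypothetical, not Wax’s actual API):

```python
# Sketch: map generated FC_ASSERT constants to exception classes so clients
# can catch typed errors instead of regexp-parsing error messages.

class WaxError(Exception):
    """Base class for errors surfaced by the API layer."""

class MissingAuthorityError(WaxError):
    pass

class InsufficientBalanceError(WaxError):
    pass

# Hypothetical generated constants -> exception classes
ERROR_MAP = {
    "missing_authority": MissingAuthorityError,
    "insufficient_balance": InsufficientBalanceError,
}

def raise_for_api_error(error_code: str, message: str):
    exc = ERROR_MAP.get(error_code, WaxError)
    raise exc(message)

try:
    raise_for_api_error("missing_authority", "missing required posting authority")
except MissingAuthorityError as e:
    caught = str(e)
```

Client code then handles error classes with ordinary `except` clauses, and unknown codes still surface through the base class.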
This is a Hive wallet extension that allows you to sign transactions using keys derived from your MetaMask wallet.
We are currently preparing the library for an official security audit by improving project documentation and fixing issues after an internal review.
To make joining Hive smoother, we created a Hive Bridge service providing basic features such as signing and encrypting (which can be used in bot authentication flows where a given user needs to confirm their authority by encrypting a provided buffer). The service is available at:
https://auth.openhive.network

This is a TypeScript library for automated Hive-related tasks (e.g. for writing bots that process incoming blocks). We recently made performance optimizations to workerbee to support large-scale block processing scenarios where lots of previous blocks need to be fetched. Recent work has sped this up by 3x. We’re continuing to work on further optimizations to eliminate bottlenecks.
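One common way to speed up “fetch many historical blocks” workloads is to issue requests in concurrent batches rather than one at a time, so round-trip latency is paid once per batch instead of once per block. Workerbee itself is TypeScript; this Python sketch only illustrates the batching idea, and `fetch_block` is a stand-in, not a real API call:

```python
# Illustration of batched/concurrent block fetching.
import asyncio

async def fetch_block(num):
    await asyncio.sleep(0.001)  # stand-in for an API round trip
    return {"block_num": num}

async def fetch_range_concurrently(start, count, batch=100):
    blocks = []
    for lo in range(start, start + count, batch):
        hi = min(lo + batch, start + count)
        # One batch of requests in flight at a time: latency is paid
        # once per batch, not once per block.
        blocks += await asyncio.gather(*(fetch_block(n) for n in range(lo, hi)))
    return blocks

blocks = asyncio.run(fetch_range_concurrently(1, 250))
```

The batch size bounds how many requests are in flight at once, which keeps the client from overwhelming the API node while still amortizing latency.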
We officially released the healthchecker UI component. This component can be installed into a Hive web app to allow a user to monitor the performance of API nodes available to them and control which API node is used. The HAF block explorer UI was also updated to use this component.
HiveSense is a brand-new HAF app that optionally allows semantic searching of Hivemind data (i.e. Hive posts). This should solve a long-standing problem where it has been difficult for users to find older content, so it should be a very nice improvement to the ecosystem.
It works by running deep learning algorithms to generate vector embeddings for Hive posts. These can then be searched to identify posts that are semantically related (related by meaning rather than by exactly matching words) to a user’s search term.
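The core retrieval step is simple once the embeddings exist: rank posts by vector similarity to the query embedding. A toy illustration (the vectors here are hand-made three-dimensional stand-ins; HiveSense uses deep-learning embeddings of real posts with many more dimensions):

```python
# Toy illustration of embedding-based semantic search via cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

posts = {
    "gardening tips": [0.9, 0.1, 0.0],
    "growing tomatoes": [0.8, 0.2, 0.1],  # closest in meaning to the query
    "crypto markets": [0.0, 0.1, 0.9],
}
query = [0.75, 0.25, 0.1]  # pretend embedding of "how to grow vegetables"

best = max(posts, key=lambda title: cosine(query, posts[title]))
```

Because similarity is computed on meaning vectors rather than keywords, a query can surface posts that share no words with it, which is what makes finding older content feasible.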
This project is still undergoing development and testing right now, but the code is available for public experimentation. The new repo is here:
https://gitlab.syncad.com/hive/hivesense