You can download an addressbook here to connect to peers faster. Place this file in your `.osmosisd/config` folder and restart your node for the change to take effect.

The addressbook for the testnet can be found here. Again, place the file in your `.osmosisd/config` folder and restart your node.
```shell
mv addrbook.....json addrbook.json
```
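Putting the steps above together, a minimal install sketch (the download URL and the service name are placeholders, not the real links):

```shell
# Sketch: install a downloaded addressbook into the node's config directory.
# The actual download URL is whichever addrbook link above matches your network.
OSMOSIS_HOME="${OSMOSIS_HOME:-$HOME/.osmosisd}"
mkdir -p "$OSMOSIS_HOME/config"
# wget -O addrbook.json <addrbook-url-from-above>   # real download step
echo '{"addrs": []}' > addrbook.json                # stand-in so the sketch runs
mv addrbook.json "$OSMOSIS_HOME/config/addrbook.json"
echo "installed to $OSMOSIS_HOME/config/addrbook.json"
# then restart the node, e.g.: sudo systemctl restart osmosisd
```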
The snapshot is made with `snapshot-interval = 0`; make sure your `app.toml` uses the same setting.
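For reference, `snapshot-interval` lives in the `[state-sync]` section of `app.toml` (a minimal fragment; other keys in that section are left at their defaults):

```toml
# app.toml
[state-sync]
# 0 disables local snapshot creation; the quicksync archives assume this value
snapshot-interval = 0
```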
```shell
# [Optional] Install and set up osmosisd first
osmosisd config chain-id osmosis-1
wget -O $HOME/.osmosisd/config/genesis.json https://media.githubusercontent.com/media/osmosis-labs/networks/main/osmosis-1/genesis.json
# [Optional] Edit config.toml to add persistent peers and other needed config (see https://github.com/osmosis-labs/networks)

# Stop the Osmosis daemon first if it's running
sudo apt-get update -y
sudo apt-get install wget liblz4-tool aria2 -y
sudo su - [osmosisuser]

# Change network to default/pruned/archive and mirror to Netherlands/Singapore/SanFrancisco depending on your needs
URL=`curl -L https://quicksync.io/osmosis.json | jq -r '.[] | select(.file=="osmosis-1-default") | select(.mirror=="Netherlands") | .url'`
aria2c -x5 $URL
```

OR (single-threaded, but no double disk space needed and no option to verify the checksum):

```shell
wget -O - $URL | lz4 -d | tar -xvf -
```

```shell
# Compare the checksum with the on-chain version. The hash can be found at $URL.hash
curl -s https://lcd-cosmos.cosmostation.io/txs/`curl -s $URL.hash` | jq -r '.tx.value.memo' | sha512sum -c
./checksum.sh `basename $URL`
lz4 -cd `basename $URL` | tar xf -
# Start the Osmosis daemon
```
Computing a SHA-512 of the complete download takes a long time, so instead we take a SHA-512 hash of the first 1 MB of every 1 GB block. These hashes are stored in a checksum file, and the SHA-512 of that checksum file is stored on-chain as the memo of a transaction on the Cosmos account. In short: you verify the SHA-512 of the checksum file by looking up the transaction hash on-chain, then use the checksum file to validate the download. The provided checksum.sh script automates this.
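The chunked hashing described above can be sketched as follows. This is a minimal illustration of the idea, not the actual checksum.sh; the archive name is a placeholder, and the demo uses a 2 MB block instead of 1 GB so a small dummy file is enough:

```shell
# Demo of the partial-hash scheme: hash the first 1 MB of every BLOCK bytes.
BLOCK=$((2 * 1024 * 1024))      # 1 GB in the real scheme; 2 MB for the demo
SAMPLE=$((1024 * 1024))         # first 1 MB of each block gets hashed
ARCHIVE="archive.bin"           # stands in for the downloaded .tar.lz4
dd if=/dev/zero of="$ARCHIVE" bs=1M count=5 2>/dev/null

SIZE=$(stat -c%s "$ARCHIVE")
BLOCKS=$(( (SIZE + BLOCK - 1) / BLOCK ))
: > checksums.txt
for i in $(seq 0 $((BLOCKS - 1))); do
  # read 1 MB starting at each block boundary and hash it
  dd if="$ARCHIVE" bs=$SAMPLE count=1 skip=$((i * BLOCK / SAMPLE)) 2>/dev/null \
    | sha512sum >> checksums.txt
done
wc -l checksums.txt             # one hash line per block
# the SHA-512 of checksums.txt itself is what goes on-chain as the tx memo
sha512sum checksums.txt
```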
Quicksync considerably reduces the time it takes to re-sync a node to the current block. We achieve this by creating various compressed archives and serving them from high-performance infrastructure. The service is crucial for validators and other service providers who require fast deployments or quick recovery of existing services.