You can download an addressbook here to get connected to peers faster. Place this file in your .osmosisd/config folder and restart your node for it to take effect.
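As a sketch (the addressbook URL comes from the link above; the filename, default osmosisd home path, and systemd unit name are assumptions):

```shell
# Place the downloaded addressbook (filename assumed to be addrbook.json)
cp addrbook.json $HOME/.osmosisd/config/addrbook.json
# Restart the node so the new peers are picked up (systemd unit name assumed)
sudo systemctl restart osmosisd
```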
The snapshot is made with snapshot-interval = 0; make sure your app.toml uses the same setting.
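To check and, if needed, correct the setting (a sketch assuming the default osmosisd home directory):

```shell
# Show the current state-sync snapshot setting
grep -E '^snapshot-interval' $HOME/.osmosisd/config/app.toml
# Set it to 0 if it differs (assumes the key is already present in app.toml)
sed -i 's/^snapshot-interval *=.*/snapshot-interval = 0/' $HOME/.osmosisd/config/app.toml
```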
# [Optional] Install and Setup osmosis first
osmosisd config chain-id osmosis-1
wget -O $HOME/.osmosisd/config/genesis.json https://media.githubusercontent.com/media/osmosis-labs/networks/main/osmosis-1/genesis.json
# [Optional] Edit config.toml to add persistent peers and other needed config (see https://github.com/osmosis-labs/networks)
# Stop the Osmosis daemon first if it's running
sudo apt-get update -y
sudo apt-get install wget liblz4-tool aria2 -y
sudo su - [osmosisuser]
# change network to default/pruned/archive and mirror to Netherlands/Singapore/SanFrancisco depending on your needs
FILENAME=$(curl -s https://quicksync.io/osmosis.json | jq -r '. | select(.network=="default") | select(.mirror=="Netherlands") | .filename')
aria2c -x5 https://get.quicksync.io/$FILENAME
# OR stream the download and extract on the fly (single-threaded, but no extra disk space needed for the archive)
wget -O - https://get.quicksync.io/$FILENAME | lz4 -d | tar -xvf -
# Compare the checksum with the on-chain version. The hash can be found at https://get.quicksync.io/osmosis-1-default.DATE.TIME.tar.lz4.hash
curl -s https://api-osmosis.cosmostation.io/v1/tx/hash/`curl -s https://get.quicksync.io/$FILENAME.hash`|jq -r '.data.tx.body.memo'|sha512sum -c
# Extract the archive downloaded with aria2c (lz4 needs -c to write to stdout for the pipe)
lz4 -c -d $FILENAME | tar xf -
# Start Osmosis Daemon
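How you start the daemon depends on your setup; a sketch assuming a systemd unit named osmosisd:

```shell
sudo systemctl start osmosisd   # or simply: osmosisd start
journalctl -u osmosisd -f       # follow the logs to confirm the node resumes syncing
```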
Making a SHA-512 hash of the complete download takes a long time, so instead we hash the first 1 MB of every 1 GB of the file with SHA-512. These hashes are stored in a checksum file, and the SHA-512 hash of that checksum file is stored on-chain as a memo on the Cosmos account. In short: you verify the SHA-512 of the checksum file by looking up the transaction on-chain, then use the checksum file to validate the download. A checksum.sh script is provided to do so.
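A minimal sketch of the scheme (our illustration only, not the actual checksum.sh; block and stride sizes are scaled down from the real 1 MB / 1 GB so the demo runs instantly, and the dummy file stands in for the real download):

```shell
FILE=demo.bin
BLOCK=1024     # bytes hashed per stride (1 MB in the real scheme)
STRIDE=4096    # distance between hashed blocks (1 GB in the real scheme)
# Generate a small dummy archive in place of the real download
dd if=/dev/urandom of="$FILE" bs=$STRIDE count=4 2>/dev/null
SIZE=$(stat -c%s "$FILE")
: > checksums.txt
# SHA-512 of the first BLOCK bytes at every STRIDE boundary
for off in $(seq 0 $STRIDE $((SIZE - 1))); do
  dd if="$FILE" bs=$BLOCK skip=$((off / BLOCK)) count=1 2>/dev/null | sha512sum >> checksums.txt
done
# The SHA-512 of the checksum file itself is what goes on-chain as the tx memo
sha512sum checksums.txt | cut -d' ' -f1
```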
Quicksync considerably reduces the time it takes to re-sync nodes to the current block. We achieve this by creating various compressed archives and serving them from high-performance servers. The service is crucial for validators and other service providers who require fast deployments or quick recovery of existing services.