From Pulsed Media Wiki
Revision as of 13:10, 12 April 2014

How to super seed with a seedbox?


This super seeding article is about seeding the maximum amount of data in the least possible time by utilizing seedboxes, instead of the traditional "superseed" mode of seeding consecutive blocks from a single source. Here we show you how to get the maximum amount of data into the swarm, to serve thousands of users simultaneously.

We are going to use a multitude of seedboxes to achieve this.

Who is this targeted at?

Content publishers, tracker maintainers etc.: people who need to get their data to as many users as possible, as quickly as possible. For example, Blizzard releases updates via BitTorrent and sees a huge spike of traffic upon each new release; the only concern is getting as much of the data out as fast as possible, not seeding from a single source as the traditional "superseed" mode is designed for.

This actually does not differ that much from what Blizzard does.

What's needed

You need multiple seedboxes in order to do this: at the very least one very fast box and multiple "slave seeders". You can work with anything from a few up to dozens or even hundreds*.

The type of seedboxes depends upon your budget. The idea behind using many is to involve as many discrete I/O resources, IPs etc. as possible; all of these instances will have differing timings, ensuring faster connection speeds to new downloaders (leechers).

  *) Ask support to help distribute your .torrent file to the hundreds of boxes; it should be scripted.

Recommendation of resources

We recommend using many of our shared slots; this way you get access to as many discrete resources as possible for the least amount of money. You don't necessarily need a big-budget cluster of dedicated servers.

For example, get a few 2012 slots for the initial fast seeds, then a bunch of 2009+, Value or Super slots, maybe in conjunction with a few dedis, depending upon your budget and needs.

How many resources do I need?

First we need to determine your target amount of data seeded (X) in an allotted time (Y). Also remember: the more individual instances there are, the more stable the speeds will be.

If X = 5 TB and Y = 1 week:

The formula is: X in megabytes / Y in seconds = bandwidth required

5 TB is 5 242 880 MB and one week is 604 800 seconds, so 5 242 880 / 604 800 ≈ 8.7 megabytes per second.

In this instance, you will do fine with just a single 2012 slot, and might even get away with a single 100Mbps slot.

X = 100 TB, Y = 1 week: 104 857 600 / 604 800 ≈ 173.4 megabytes per second, or roughly 1.4Gbps sustained (plan for about 2Gbps of raw capacity).

Since on shared slots we should account for at most 20% of a 1Gbps line, and about 50% of a 100Mbps line for longer-term averages, we get:

  • 1Gbps shared slot: 200Mbps, or 25 MB/s
  • 100Mbps shared slot: 50Mbps, or 6.2 MB/s

Dedicated servers in practice sustain about 25-40% of a 1Gbps line (though you might achieve 90% for a few days), and about 90% of a 100Mbps line.

We choose 2x 1Gbps slots to get things started, for a combined 50 MB/s, and cover the remainder with 100Mbps slots: 123.4 / 6.2 ≈ 20 slots, for a further 124 MB/s. Combined total: 174 MB/s.
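The sizing arithmetic above can be sketched in a few lines of Python. The 25 MB/s and 6.2 MB/s per-slot figures are the shared-slot estimates from the list above:

```python
import math

def required_mb_per_s(terabytes, seconds):
    """Bandwidth needed to move `terabytes` of data in `seconds`."""
    return terabytes * 1024 * 1024 / seconds

WEEK = 7 * 24 * 3600  # 604 800 seconds

# First example: 5 TB in one week
print(round(required_mb_per_s(5, WEEK), 1))     # 8.7 MB/s

# Second example: 100 TB in one week
need = required_mb_per_s(100, WEEK)             # ~173.4 MB/s
remainder = need - 2 * 25                       # two 1Gbps slots at ~25 MB/s each
slots_100 = math.ceil(remainder / 6.2)          # 100Mbps slots at ~6.2 MB/s each
print(slots_100)                                # 20 slots
```

Plug in your own X and Y to size your cluster before ordering.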


We are using the second example with 2x1Gbps and 20x100Mbps slots for this.

Torrent creation

Create your .torrent file with your favorite means directly on one of the 1Gbps slots. Choose your preferred trackers etc.

There's really nothing special about this stage.
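If you're curious what actually goes into the .torrent file you just created, here is a rough single-file sketch in Python. The bencoder and make_torrent helper are purely illustrative (and the announce URL is a placeholder); in practice your client's "create torrent" dialog or a tool like mktorrent does all of this for you:

```python
import hashlib
import os
import tempfile

def bencode(value):
    """Encode ints, bytes, str, lists and dicts in bencoding."""
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, str):
        value = value.encode()
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        items = sorted(value.items())  # metainfo dict keys must be sorted
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(f"cannot bencode {type(value)}")

def make_torrent(path, announce, piece_len=256 * 1024):
    """Build single-file .torrent metainfo: SHA-1 per piece, plus file info."""
    pieces = b""
    with open(path, "rb") as f:
        while chunk := f.read(piece_len):
            pieces += hashlib.sha1(chunk).digest()
    info = {"name": os.path.basename(path),
            "length": os.path.getsize(path),
            "piece length": piece_len,
            "pieces": pieces}
    return bencode({"announce": announce, "info": info})

# Demo on a small throwaway file (tracker URL is a placeholder)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello swarm" * 1000)
    tmp = f.name
data = make_torrent(tmp, "http://tracker.example.com/announce")
os.unlink(tmp)
```

The resulting bytes are exactly what you would write out as file.torrent.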

Transferring .torrent file AND warming things up

We have multiple ways to do this, here's the preferred way.

  • Login via SSH to the node where you created the torrent file
  • Go to the directory containing your torrent file, we'll call it "file.torrent"
  • command to transfer the file: scp file.torrent USERNAME@SERVER:watch/

Transfer it to the 2nd 1Gbps node. If your username is super2 and the server name is seeder2.pulsedmedia.com, the command would be:

  scp file.torrent super2@seeder2.pulsedmedia.com:watch/

Watch directory: you can load torrent files here, and they are automatically loaded after a short while.

At this stage we should go for a short coffee break to allow the 2nd 1Gbps node to snatch the data! This way the 2nd stage goes much, much quicker.

Warm up the superseeding swarm

Now repeat the SCP for all the slave seeder 100Mbps slots, for example:

scp file.torrent super@slave1.pulsedmedia.com:watch/
scp file.torrent super@slave2.pulsedmedia.com:watch/
scp file.torrent super@slave3.pulsedmedia.com:watch/
scp file.torrent super@slave4.pulsedmedia.com:watch/

Continue until you have it on all nodes. The slaves should get the data at a combined rate of about 50 MB/s or more, accelerating all the way. Check the status from the most recently loaded slave node.
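Typing twenty scp commands by hand gets old. A small Python sketch can build the same commands in a loop; the slaveN.pulsedmedia.com hostnames and the super username are the placeholders used above, and you would set run=True only once SSH key authentication to each node is in place:

```python
import subprocess

def distribute(torrent="file.torrent", user="super", count=20, run=False):
    """Push `torrent` to the watch/ directory of slave1..slaveN via scp."""
    cmds = []
    for i in range(1, count + 1):
        cmd = ["scp", torrent, f"{user}@slave{i}.pulsedmedia.com:watch/"]
        cmds.append(cmd)
        if run:
            subprocess.run(cmd, check=True)  # needs SSH key auth to each node
    return cmds

cmds = distribute()  # dry run: just builds the command list
```

This is also roughly what support would script for you if you're running hundreds of boxes.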

Release the SWARM!

Depending on the size of the data: if it's just a few gigabytes, it's probably safe to release quite early on, depending on how fast you think your end users will start snatching it up. If thousands of end users will start downloading within seconds of release, it's better to wait until the slaves are at 100% or very near it; otherwise there will be serious hiccups at the last few % for the swarm, delaying the end users from getting their data. That's something none of us wants!

Let's assume your package is 10 GB and you expect 1000 users to snatch it up immediately: that's 10 000 gigabytes to upload. Seeding all of this at 174 MB/s will take ~16 hours 21 minutes, not accounting for end-user-to-end-user seeding. It's likely, however, that you will see initial speed spikes in the range of 300 MB/s, so the first few users will get it much, much quicker. As time goes on, the seeding speed decreases, because the fast peers have already finished while the slow ones keep dragging it out and consuming upload slots.
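The 16 h 21 min figure follows directly from the numbers above; a quick sanity check:

```python
package_gb = 10
users = 1000
rate_mb_s = 174                          # combined seed bandwidth from earlier

total_mb = package_gb * users * 1024     # 10 240 000 MB to upload
hours = total_mb / rate_mb_s / 3600      # ~16.35 hours
minutes = (hours - int(hours)) * 60
print(int(hours), round(minutes))        # 16 h 21 min
```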

The total target in this example, for a 10 GB package, is more than 10 000 end users (100 TB / 10 GB). If the package is just 1 GB, the same 100 TB accounts for 100 000 end users. That's quite a few! And that ignores end-user-to-end-user seeding.