But for read performance (which is IMO what the section in the article was motivated by), it doesn't actually matter that your index holds a bunch of entries pointing at dead tuples, provided Postgres never has to consider those dead tuples as part of your query.
So if you have a monotonically increasing `job_id` and that's indexed, then as long as you process your jobs in increasing `job_id` order, you can use the index and guarantee you never have to reconsider the dead tuples corresponding to jobs that have already completed.
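To make that concrete, here's a sketch with a hypothetical `jobs` table (the schema and the `:last_completed_id` parameter are mine, purely for illustration):

```sql
-- Hypothetical jobs table; job_id increases monotonically.
CREATE TABLE jobs (
    job_id  bigserial PRIMARY KEY,
    payload text NOT NULL
);

-- The worker remembers the highest job_id it has finished and only asks
-- for rows beyond it. The b-tree seeks straight to that point, so index
-- entries for already-completed (possibly dead) tuples are never visited.
SELECT job_id, payload
FROM jobs
WHERE job_id > :last_completed_id   -- bind parameter tracked by the worker
ORDER BY job_id
LIMIT 100;
```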
[This is because the index is a b-tree, which supports efficient seeking (O(log n) page reads for n entries) on any prefix of the columns in the index.]
Rough intuition: Postgres doesn't immediately delete rows; it just marks them as invalid after a certain snapshot version/transaction ID (and this mark lives in the heap, not the indexes, AFAIK).
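You can actually watch those not-yet-deleted rows accumulate via the statistics views. Assuming some table called `jobs`, something like:

```sql
-- n_dead_tup counts rows that were deleted/updated but whose storage
-- hasn't been reclaimed by (auto)vacuum yet.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'jobs';
```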
For every potential tuple the index returns, Postgres needs to visit the heap to check whether that tuple is alive. UNLESS every tuple on that heap page is known to be visible, in which case an optimisation called the 'visibility map' allows that check to be skipped (this is what makes Index-Only Scans work, where Postgres can get all the results for your query from the index directly).
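You can see this in `EXPLAIN (ANALYZE)` output: an Index Only Scan node reports a "Heap Fetches" count, i.e. how often the visibility map wasn't enough and the heap had to be consulted anyway. A rough sketch, again assuming a `jobs` table with an indexed `job_id`:

```sql
VACUUM jobs;  -- sets visibility-map bits for all-visible pages

EXPLAIN (ANALYZE)
SELECT job_id FROM jobs ORDER BY job_id;
-- Look for "Index Only Scan ... Heap Fetches: N" in the plan; right
-- after a vacuum, N should be 0 or close to it.
```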
The only ways to avoid the problem are therefore to either vacuum frequently enough that Postgres never gets dead tuples back from the index (which it must then visit the heap to check for liveness), or to bake some condition into your queries that prevents the dead tuples' index entries from being considered at all. Vacuuming frequently is expensive and conflicts with long-running transactions, so the latter is generally the option to go for when it matters.
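For completeness, the "vacuum more often" knob can at least be tuned per table rather than globally; the table name and thresholds below are illustrative, not recommendations:

```sql
-- By default autovacuum kicks in after roughly 20% of a table's rows
-- change (autovacuum_vacuum_scale_factor = 0.2). For a high-churn queue
-- table you'd crank that way down so dead tuples are reclaimed quickly.
ALTER TABLE jobs SET (
    autovacuum_vacuum_scale_factor = 0.0,
    autovacuum_vacuum_threshold    = 1000
);
```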
[n.b. I feel I should note I am not a Postgres developer and wouldn't call myself an expert, just an enthusiast who has dealt with a few problems here and there. So what I say should be taken with a grain or two of salt, though I believe it to be accurate.]