Yes, DNA may have splendid information density, but retrieving the data is a pain in the ass.
Also, the article is horribly bad (surprise!). They've stored 700 TB? Maybe, if you count "replicating the same 10-megabase sequence over and over" as a valid way of doing it. And framing it in terms of books is hilarious. Assuming 500 kB per book (books compress well, you know; most epubs I have lying around are about 500 kB, cover image included), the whole 70-million-book archive would take a staggering... 33 TB. Oh lol. I've already got room for 17 TB on my boxes (including RAID6 redundancy on most of it), and with today's disks you could fit the whole thing in a single desktop computer (nine 4 TB drives). Not exactly difficult, at least compared to digitizing 70 million books in the first place...
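The arithmetic is easy to sanity-check; a quick sketch, assuming ~500 kB per compressed book (treating "kB" as 1024 bytes):

```python
# Back-of-the-envelope check of the 70-million-book archive size.
# Assumption: ~500 kB per compressed epub, counted in binary kilobytes.
books = 70_000_000
bytes_per_book = 500 * 1024          # 500 kB

total_bytes = books * bytes_per_book
total_tib = total_bytes / 2**40      # convert to tebibytes

print(f"{total_tib:.1f} TiB")        # about 32.6 TiB
```

Call it 33 TB in round numbers; counting in decimal terabytes (500,000-byte books, 10^12-byte TB) lands around 35 TB instead, but either way it fits in one desktop.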
I'll just chalk this one up to "reporter hears about revolutionary new storage method, writes glowing article about it, nothing further happens, and the world moves on". Remember holographic disks (HVD)?