diff --git a/public/img1.png b/public/img1.png new file mode 100644 index 0000000..4775072 Binary files /dev/null and b/public/img1.png differ diff --git a/public/img2.png b/public/img2.png new file mode 100644 index 0000000..6c50380 Binary files /dev/null and b/public/img2.png differ diff --git a/public/img3.png b/public/img3.png new file mode 100644 index 0000000..e2ccf59 Binary files /dev/null and b/public/img3.png differ diff --git a/public/img4.jpg b/public/img4.jpg new file mode 100644 index 0000000..ba0ba3a Binary files /dev/null and b/public/img4.jpg differ diff --git a/public/img5.png b/public/img5.png new file mode 100644 index 0000000..b24cb63 Binary files /dev/null and b/public/img5.png differ diff --git a/public/img6.png b/public/img6.png new file mode 100644 index 0000000..10fb541 Binary files /dev/null and b/public/img6.png differ diff --git a/src/app/blog/posts/1/page.tsx b/src/app/blog/posts/1/page.tsx index 57d878c..ddf99be 100644 --- a/src/app/blog/posts/1/page.tsx +++ b/src/app/blog/posts/1/page.tsx @@ -40,7 +40,7 @@ export default function ExamplePost() { document.head.appendChild(metaOgDesc); } metaOgDesc.setAttribute('content', description); - }, []); + }, [pageTitle]); return ( diff --git a/src/app/blog/posts/2/accuracy.json b/src/app/blog/posts/2/accuracy.json new file mode 100644 index 0000000..1f52ac4 --- /dev/null +++ b/src/app/blog/posts/2/accuracy.json @@ -0,0 +1,34 @@ +{ + "data": {"url": "./scatter.csv"}, + "transform": [ + { + "calculate": "1 - abs(datum.y - 3.141592653589793) / 3.141592653589793", + "as": "closeness_to_pi" + } + ], + "mark": {"type": "point", "size": 30}, + "width": 300, + "height": 300, + "encoding": { + "x": { + "field": "log10_x", + "title": "Iterations", + "axis": { + "labelOverlap": "greedy", + "labelExpr": "'10^' + datum.value" + } + }, + "y": { + "field": "closeness_to_pi", + "title": "Accuracy (% close to π)", + "scale": { + "zero": false, + "reverse": true + }, + "axis": { + "labelOverlap": "greedy", + 
"format": ".1%" + } + } + } +} \ No newline at end of file diff --git a/src/app/blog/posts/2/metadata.ts b/src/app/blog/posts/2/metadata.ts new file mode 100644 index 0000000..e46db53 --- /dev/null +++ b/src/app/blog/posts/2/metadata.ts @@ -0,0 +1,23 @@ +import { Metadata } from "next"; + +// Post content +export const title = "dnsimg - storing images in txt records"; +export const description = "I was intrigued by the idea of storing images in DNS records, and I wanted to test out how effectively images could be stored in DNS records. I've always been interested in TXT records because they seem to be a useful way of storing arbitrary data, and in this blog post I'll discuss how I went from an idea to developing the project into almost a protocol sort of method for storing an image on a domain name."; + +// Next.js metadata +export const generateMetadata = (): Metadata => { + return { + title: title, + description: description, + openGraph: { + title: title, + description: description, + type: 'article', + }, + twitter: { + card: 'summary', + title: title, + description: description, + }, + }; +}; diff --git a/src/app/blog/posts/2/page.tsx b/src/app/blog/posts/2/page.tsx new file mode 100644 index 0000000..a836032 --- /dev/null +++ b/src/app/blog/posts/2/page.tsx @@ -0,0 +1,244 @@ +"use client" +import { title, description } from './metadata'; +import Link from 'next/link'; +import { PageTransition } from '@/components/PageTransition'; +import { useEffect } from 'react'; +import BackButton from '@/components/BackButton'; +import Image from 'next/image'; + +import CodeBlock from '../../../../components/code' + +// Note: Static metadata is also generated by the generateMetadata export in metadata.ts +// This useEffect hook ensures the title is properly set on the client side + +export default function ExamplePost() { + const pageTitle = `${title} | Asher Falcon`; + + // Update the title and metadata on the client side + useEffect(() => { + document.title = pageTitle; 
+ + // Update meta tags + const metaDescription = document.querySelector('meta[name="description"]'); + if (metaDescription) { + metaDescription.setAttribute('content', description); + } + + // Update Open Graph tags + let metaOgTitle = document.querySelector('meta[property="og:title"]'); + if (!metaOgTitle) { + metaOgTitle = document.createElement('meta'); + metaOgTitle.setAttribute('property', 'og:title'); + document.head.appendChild(metaOgTitle); + } + metaOgTitle.setAttribute('content', title); + + let metaOgDesc = document.querySelector('meta[property="og:description"]'); + if (!metaOgDesc) { + metaOgDesc = document.createElement('meta'); + metaOgDesc.setAttribute('property', 'og:description'); + document.head.appendChild(metaOgDesc); + } + metaOgDesc.setAttribute('content', description); + }, [pageTitle]); + + + return ( + +
+
+
+ +
+ {title} +
+
+
+

{description}

+
+
+
+

So, an image inside DNS? How can it be done? The most obvious way, and the method I tried here, was storing the data inside TXT records. Firstly, we need a way to encode the image for DNS records. My initial approach was simply taking the hex characters of the data, which we get using the command below:

+
+ + output.txt`} /> + +
+
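The same hex dump can be produced in Python with the standard library (a sketch, not the post's exact command; the filename is illustrative):

```python
import binascii

# Read an image and emit its lowercase hex characters,
# equivalent in spirit to `xxd -p` (illustrative filename).
def image_to_hex(path: str) -> str:
    with open(path, "rb") as f:
        return binascii.hexlify(f.read()).decode("ascii")

# For example, the first two bytes of a JPEG, b"\xff\xd8",
# hex-encode to the string "ffd8".
```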

This is not as efficient as storing the data in base64: hex uses 2x the file size, where base64 would use only about 1.33x. However, for testing I believe it is fine for now.

+
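Those overhead figures are easy to check directly: hex spends two text bytes per input byte, while base64 spends four per three.

```python
import binascii
import base64

data = b"\x00" * 300  # stand-in for image bytes

hex_len = len(binascii.hexlify(data))    # 2 text bytes per input byte
b64_len = len(base64.b64encode(data))    # 4 text bytes per 3 input bytes

# hex_len is 600 (2.0x the input), b64_len is 400 (~1.33x)
```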

The next hurdle, as you can see below, is that when I tried to put all the hex data in one TXT record, Cloudflare showed an error:

+
+
+ output +
+
+

So, we need to split our hex data into 2048-character chunks. A simple Python script can do this, shown below:

+
+ + + +
+

This will create a TXT record for each chunk of the image, and a 'dnsimg-count' record for the total number of chunks. The count is necessary so that when we want to load the image, we know how many chunks exist and how many we need to request.

+
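A script along these lines does the job (a sketch, not the repository's exact script: the `dnsimg-N` / `dnsimg-count` names follow the post's scheme, but the zone-file layout, TTL, and `example.com` domain are my assumptions):

```python
CHUNK = 2048  # Cloudflare rejected a single record holding the full hex dump


def make_records(hex_data: str, domain: str = "example.com") -> list[str]:
    """Split hex data into TXT-record-sized chunks plus a count record."""
    chunks = [hex_data[i:i + CHUNK] for i in range(0, len(hex_data), CHUNK)]
    records = [
        f'dnsimg-{n}.{domain}. 300 IN TXT "{chunk}"'
        for n, chunk in enumerate(chunks)
    ]
    # Readers need to know how many chunks to request.
    records.append(f'dnsimg-count.{domain}. 300 IN TXT "{len(chunks)}"')
    return records
```

Writing the returned lines to a file gives a zone file that a DNS provider's import feature can ingest.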
+ + + +
+

We can then upload the DNS file to Cloudflare and import it, which will create all the records for us. After a few minutes, using the dig command we can see that the chunks have been stored. Cloudflare splits each record into further strings (TXT records are built from 255-byte strings under the hood), but that is not an issue as we can just concatenate them.

+
+ +
+ output + output +
+ +
+

Now that we know our data is out there, let's try to rebuild the image from the DNS records. We can write another simple Python script to fetch them asynchronously using dig and then concatenate them into a single file in JPG format. See the script below:

+
+ + + +
+
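A sketch of such a fetcher (not the author's exact script; it assumes `dig` is installed and the `dnsimg-N` / `dnsimg-count` names from earlier):

```python
import asyncio
import binascii


async def fetch_txt(name: str) -> str:
    # Query one TXT record with dig; assumes dig is on PATH.
    proc = await asyncio.create_subprocess_exec(
        "dig", "+short", "TXT", name,
        stdout=asyncio.subprocess.PIPE,
    )
    out, _ = await proc.communicate()
    # dig prints long records as multiple quoted strings; strip the
    # quotes and spaces to rejoin them into one hex chunk.
    return out.decode().replace('"', "").replace(" ", "").strip()


def reassemble(chunks: list[str]) -> bytes:
    # Concatenate the hex chunks in order and decode back to image bytes.
    return binascii.unhexlify("".join(chunks))


async def download(domain: str, path: str = "out.jpg") -> None:
    count = int(await fetch_txt(f"dnsimg-count.{domain}"))
    chunks = await asyncio.gather(
        *(fetch_txt(f"dnsimg-{i}.{domain}") for i in range(count))
    )
    with open(path, "wb") as f:
        f.write(reassemble(list(chunks)))
```

Because the per-chunk queries run concurrently via `asyncio.gather`, fetching a few dozen records takes roughly as long as the slowest single lookup.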

When I first tried this, I don't think the records had fully propagated, so I had to wait a few minutes before I could see the image. Below is the (slightly) corrupted image produced while a few records were still missing:

+
+ +
+ output +
+ +
+

After waiting another 10 or so minutes, we can run it again and get the full image through! The image is stored in 21 chunks of 2048 characters; it's not a terribly high resolution, but it serves as a good first proof of concept:

+
+ +
+ output +
+ +
+

Next I wanted to try some larger images, which mostly worked, but I hit an upper bound with an image over 1MB. I'm not sure if this is a Cloudflare limit or a wider DNS rule, but here's the error I got:

+
+ +
+ output +
+ +
+

So, finally, I created a lovely web tool you can try out here, which lets you type a domain and load its image. I created images on the domains 'asherfalcon.com' and 'containerback.com', but you should try adding images to your own domains! You can use a domain or any subdomain, and use the scripts in the repository here to create your own image. If you want to see a video of the web tool in action, see below:

+
+ +
+