I've been researching this all week and, so far, there seem to be two methods to retrieve a site from archive.org:
1.) You get a VA to copy and paste each page individually. You can list every page of a site by appending a * after the domain name in the Wayback Machine search.
2.) You use Warrick to download everything, parse it yourself, and then upload it to WordPress.
Both seem like a shit ton of boring, repetitive, mundane work. Does anyone have a better way? Or do you struggle with this yourself?
I'll be happy to spend a few weeks researching this for a new BST if enough people are interested. The biggest problem I'm having so far is that every site's structure is different, and the older the site is, the more poorly it was coded.
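For what it's worth, the * trick from method 1 can be scripted instead of handed to a VA. The Wayback Machine exposes a CDX API that returns every captured URL under a domain, which you can then feed to a scraper. Here's a rough sketch (the function names and `example.com` are just placeholders, and the field/collapse parameters are the documented CDX ones):

```python
import json
import urllib.parse
import urllib.request

CDX_ENDPOINT = "http://web.archive.org/cdx/search/cdx"

def cdx_query_url(domain, limit=50):
    """Build a CDX API query listing captures under `domain`
    (the same '*' wildcard mentioned in method 1)."""
    params = {
        "url": domain + "/*",        # wildcard: all pages under the domain
        "output": "json",
        "fl": "timestamp,original",  # fields: capture time + original URL
        "collapse": "urlkey",        # one row per unique URL
        "limit": str(limit),
    }
    return CDX_ENDPOINT + "?" + urllib.parse.urlencode(params)

def list_captures(domain, limit=50):
    """Fetch the capture list; the first row returned is a header."""
    with urllib.request.urlopen(cdx_query_url(domain, limit)) as resp:
        rows = json.load(resp)
    return rows[1:] if rows else []  # drop the header row

if __name__ == "__main__":
    for ts, url in list_captures("example.com", limit=10):
        # Rebuild the archived-page URL a scraper (or a VA) would visit
        print(f"https://web.archive.org/web/{ts}/{url}")
```

That gets you a clean list of archived pages to pull; the hard part the post describes (every site's structure being different) still applies once you're parsing the HTML itself.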