05-05-2016, 12:39 AM
OK, so you obviously won't import from the harvester (in the page auth part), because at that stage the URLs haven't been filtered yet. That's why you save only the available URLs in the vanity checker, then use that list in the page auth section.
For some reason it worked this time and I didn't need to create a new text file. Strange. All I can suggest is: if the example Moz API client works but ScrapeBox doesn't, create a new text file in Notepad or similar with a couple of URLs, import that into the page auth plugin, and see if that works.
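If you want to rule out ScrapeBox entirely, you can also test your Moz credentials by hand. This is only a rough sketch of how the legacy Mozscape URL Metrics request was signed (HMAC-SHA1 over "AccessID\nExpires"); the access ID, secret, and example URL below are placeholders, not anything from this thread, so swap in your own before trying it:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def moz_signature(access_id: str, secret_key: str, expires: int) -> str:
    """Legacy Mozscape signature: base64(HMAC-SHA1(secret, access_id + "\n" + expires))."""
    msg = f"{access_id}\n{expires}".encode()
    digest = hmac.new(secret_key.encode(), msg, hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def url_metrics_request(url: str, access_id: str, secret_key: str) -> str:
    """Assemble the GET URL for the URL Metrics endpoint (nothing is sent here)."""
    expires = int(time.time()) + 300  # signature valid for 5 minutes
    sig = moz_signature(access_id, secret_key, expires)
    query = urllib.parse.urlencode({
        "AccessID": access_id,
        "Expires": expires,
        "Signature": sig,
    })
    target = urllib.parse.quote(url, safe="")  # URL-encode the page being checked
    return f"https://lsapi.seomoz.com/linkscape/url-metrics/{target}?{query}"

# Placeholder credentials -- replace with your own member ID / secret key.
request_url = url_metrics_request("http://example.com/", "member-xxxxxxxx", "your-secret")
```

Paste the resulting URL into a browser: if you get JSON back, the credentials are fine and the problem is on the ScrapeBox side; if you get an auth error, it's the key.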