On 8/11/2016 3:10 PM, Valeri Galtsev wrote:
I am usually not good at explaining what I need. I really only need an image of what one would see in a web browser when pointing it at that URL. I don't need it to be interactive, and I don't want to mirror the content the URL points to at various depths; that is why I don't want wget or curl. I tried those first, and they break on at least one of the web sites: they seem to protect themselves against robots or similar. And we don't need mirroring anyway. We just need to show what the page shows today, that's all.
Then a screen capture is about your only option. On too many sites ALL the content is dynamic; for instance, https://www.google.com/maps/@36.9460899,-122.0268105,664a,20y,41.31t/data=!3...
That page is composed of tiles of image data superimposed on the fly, with Ajax code running in the browser to fetch the layers being displayed.
You simply can't fetch the HTML and make any sense of it; the browser is running a complex application just to display that page.
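One common way to get that kind of "what a browser would show" snapshot without wget/curl is to let a real browser engine render the page headlessly and write out a screenshot, for example with headless Chrome/Chromium. A minimal sketch follows; the browser binary name, window size, and output filename are assumptions, not prescriptions, and you'd need Chrome/Chromium installed on the machine:

```python
# Sketch: render a URL in headless Chrome/Chromium and save a PNG screenshot.
# The binary name ("chromium-browser") and viewport size are illustrative;
# adjust them for your system.
import subprocess

def build_screenshot_cmd(url, out_png, browser="chromium-browser",
                         size="1280,1024"):
    """Build the argv list for a headless-Chrome screenshot run."""
    return [
        browser,
        "--headless",               # render without opening a window
        "--disable-gpu",            # often needed on servers without a GPU
        f"--screenshot={out_png}",  # write the rendered page as a PNG
        f"--window-size={size}",    # viewport size for the capture
        url,
    ]

def take_screenshot(url, out_png):
    # Runs the browser; raises CalledProcessError if the capture fails.
    subprocess.run(build_screenshot_cmd(url, out_png), check=True)
```

Because the browser itself executes the page's JavaScript, the capture shows roughly what a user would see, Ajax-assembled tiles included, which plain HTML fetching cannot do.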