“Pics” (photos, logos, icons, maps) can be of great value in OSINT investigations. This post is a roundup of resources and tricks that will show you how to search for, find, download, scrape and analyze digital images.

If you are searching for images there is more than Google: all the big search engines have an image-search feature, including Bing, Yandex, Baidu and Yahoo. Other image search engines are Exalead, Picsearch, Photobucket and Dreamstime. If you are searching for press/news photos, Getty Images, Corbis and Reuters Pictures are the places to go. For images shared through social media, give Instagram, Flickr, Imgur or Pinterest a try.

When searching for images, use some common sense and the advanced search tools where available. Remember that you can filter by time, restrict results to faces or photos, search by color, or limit your search to top-level/country domains or specific websites. Use a translator and try searching in other languages too.
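For example, Google's site: operator also works in image search; a query like the following (the search terms are only an illustration) restricts results to Korean domains and can then be narrowed further with the built-in time and color filters:

"control room" site:.kr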

Twipho and Twicsy search media posted on Twitter by geolocation; Worldcam searches Instagram by location. Creepy (cree.py) is a geolocation OSINT tool that gathers location information through social networking platforms.

Use the Location Guard browser extension to fake your location (set it to a fixed location) in order to get media from the region of interest.

Do your research to find image databases and forums based on your area of interest. To mention only a few examples:

As you might imagine, high-resolution photos of control rooms, CCTV cameras, desks, whiteboards, phones, screens, license plates or faces can reveal interesting information. Have a look at what you can find when you know what to search for:

You can use reverse image search to double-check images before you believe that they are current or even relevant to the story they are attached to.

Good tutorials on reverse image search are “Manual reverse image search with Google and TinEye” and “Reverse image search for anime and manga”.

A list of relevant sites and tools:

Downloading

Now that you’ve found the pictures, you want to download them. This may sometimes be hindered by copy-protection mechanisms (like a disabled context menu or images loaded by JavaScript). Let’s take the Arab Image Foundation as an example:

You can see a preview on hover, but the browser context menu is disabled by JavaScript and the downloadable images are small and heavily watermarked. It is still possible to download images though: open your Page Inspector/DevTools panel and switch to the Network (Firefox) or Resources (Chrome) tab to browse the loaded images:
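Once you have spotted the full-size image URL in that list, you can usually fetch it directly with curl. Some servers check the Referer or User-Agent headers, so it may help to supply both; the URL and header values below are placeholders:

$ curl -O -A "Mozilla/5.0" -e "http://www.example.com/gallery/" "http://www.example.com/images/photo-original.jpg"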

Another method is to open the page information menu (the small information icon next to the URL in Firefox), expand the panel, click on “More Information” and download the file through the Media tab. Both methods work on ‘protected’ Flickr photos:

Pay attention to filenames and directories; the URL often reveals where the high-resolution images are. A filename like image-640x480.jpg may indicate that there is a higher-resolution version like image-1200x800.jpg or image-highres.jpg.
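A quick way to test such guesses (the host and filenames below are hypothetical) is to send HTTP HEAD requests to the candidate URLs and check which of them return a 200 status:

for name in image-1200x800.jpg image-1920x1280.jpg image-highres.jpg image.jpg; do
  # HEAD request only; print the HTTP status code next to the guessed filename
  curl -s -o /dev/null -I -w "%{http_code}  $name\n" "http://www.example.com/uploads/$name"
done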

Get ‘em all

If you need to download many images and can’t go one by one, you’ll need to scrape or crawl. Web scraping is a topic too broad to be covered in detail here, but here are some basics to get you started:

Web scrapers are tools designed to extract or gather data from a website via a crawling engine, usually written in Java, Python, Ruby or other programming languages. Web scrapers are also called web data extractors, data harvesters, crawlers and so on; most of them are web-based or can be installed on a local desktop.

A Guide to Web Scraping Tools, Gareth James

I’d suggest learning the basic syntax of wget so you can start mass-downloading images. Here is an example:

$ wget -nd -r -P /save/location -A jpeg,jpg,bmp,gif,png http://www.domain.com
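Here -nd prevents wget from recreating the site’s directory structure, -r enables recursive retrieval, -P sets the download directory, and -A restricts the download to the listed image extensions.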

A more advanced bash script that scrapes images from a list of URLs with wget: wget images scraper
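If you want to roll your own, a minimal sketch could look like the following; it assumes a plain-text file urls.txt with one page URL per line:

#!/bin/bash
# Download the images linked from every page listed in urls.txt
while read -r url; do
  # -l 1 limits the recursion to the page itself and the files it links to
  wget -nd -r -l 1 -P ./images -A jpeg,jpg,bmp,gif,png "$url"
done < urls.txt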

As sites grow larger and more dynamic, you need more sophisticated tools. HTTrack is pretty simple to use and comes with a lot of options (proxy support, custom browser agents, timeouts, limits, file inclusion/exclusion, …).
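As a rough sketch (assuming the command-line version of HTTrack; check httrack --help for the exact option names), mirroring only the image files of a site could look like this:

$ httrack "http://www.example.com" -O ./mirror "+*.jpg" "+*.jpeg" "+*.png" "+*.gif" -v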

When it comes to community sites like Tumblr or Instagram, you often need custom-made scripts or a developer account to connect to their API:
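As an illustration, a rough sketch against Tumblr’s API v2 (assuming you have registered for an API key and have jq installed; the blog name and key below are placeholders) pulls the photo URLs of a blog and pipes them straight into wget:

$ curl -s "https://api.tumblr.com/v2/blog/example.tumblr.com/posts/photo?api_key=YOUR_API_KEY" \
    | jq -r '.response.posts[].photos[].original_size.url' \
    | wget -nd -P ./images -i -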

Analyze

Image analysis is the extraction of meaningful information from images; mainly from digital images by means of digital image processing techniques. Image analysis tasks can be as simple as reading bar coded tags or as sophisticated as identifying a person from their face.

Image Analysis, Wikipedia

I’m not going to cover how to analyze the visual content of images, since there is already a lot of information out there. Read this short piece on “Analyzing Images as Text Instructions”, which focuses on war photography and the S-P-A-T-E-R (Subject, Purpose, Audience, Tone, Effect, Rhetorical Devices/Strategies) method for analyzing visual media.

Metadata

Without going too deep into forensic analysis, it should be known by now that images contain tons of embedded information, also known as metadata. The most basic tool to extract metadata is probably Jeffrey’s Image Metadata Viewer: just upload your image to the website and view the results.
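If you prefer to work locally, ExifTool does the same on the command line (assuming it is installed); for example:

$ exiftool -a -G1 image.jpg                        # dump all metadata, grouped by source
$ exiftool -n -gpslatitude -gpslongitude image.jpg # print GPS coordinates as decimal degrees, if present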

A more advanced tool with dozens of features (Metadata extraction, GPS Localization, Mime Information, Error Level Analysis, Hash Matching, …) and a fancy interface is Ghiro.

Recommended Read: MetaUseful & MetaCreepy (Bellingcat)

Verification

Every eyewitness photo and video will contain visual clues that can help with verification. Learn more about what they are and how to find them:

How to verify images like a pro with Google Earth

Piecing together visual clues for verification

Verification Handbook

Tags:
  • osint
  • research
  • guide