If you want to roll out your own mapping service and think you could save some time and money by copying Google or another mapping provider, be careful: in addition to making at least one honest mistake, Google lays down what are known as trap streets, fictitious streets placed where no one is likely to notice them. The idea is that if Bing's map of an area with the fictitious street does not include that street, then Microsoft genuinely made their own map of that particular sector. Like this:
You can tell that the one on the right is Microsoft's from the higher resolution and the fancy tilted angle. So as you can see, plain as day, Google's Kerbela Street is either a sorry excuse for a cul-de-sac or it's a trap. Either way, Microsoft did not steal this particular sector from Google, or they did at one point but then googled (or binged) for google maps trap streets like I just did and cleaned it up. High five, Microsoft!
Had they stolen it, they could have been in trouble: one small case along these lines resulted in a £20,000,000 settlement, though knowing Google they'd probably have been content just writing an angry blog post.
But seriously, map theft (theft in general, actually) is pervasive, the thieves can be easily spotted with such tricks, and they don't always do so well in court. Internet copyright theft in general, and I'm not just talking about warez and copyrighted porn (be careful, lots of traps with copyrighted porn torrents), is everywhere, and we know firsthand what it feels like to get ripped off by blog scrapers. For a reason that eludes me, there is actually a plugin hosted in WordPress's repositories that lets your WordPress site keep an eye on various sites you specify (or whole categories of sites, if you don't want to take the time to get specific). When it detects new material on the RSS feed, it will not only leech the content, it will actually rephrase sentences and substitute words using a thesaurus to dodge detection.
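If you're wondering how that "rephrasing" trick works, it's usually nothing fancier than word-for-word synonym substitution. Here's a toy sketch, with a tiny made-up synonym table standing in for a real thesaurus lookup; the plugin and its internals are not something I have the source for, so this is just an illustration of the general technique:

```python
# Toy illustration of "article spinning": naive word-for-word synonym
# substitution, the trick scrapers use to dodge exact-match searches.
# The SYNONYMS table is a made-up stand-in for a real thesaurus.
SYNONYMS = {
    "pervasive": "widespread",
    "copy": "duplicate",
    "detect": "spot",
}

def spin(text):
    """Replace each known word with a synonym. Note that a random trap
    token like 'x470h4Dq5' has no thesaurus entry, so it passes through
    unchanged -- which is exactly why trap strings still catch spinners."""
    return " ".join(SYNONYMS.get(word, word) for word in text.split())

print(spin("thieves copy and detect"))  # -> thieves duplicate and spot
```

The takeaway: spinning defeats naive plagiarism checks on the prose itself, but anything with no dictionary entry survives intact.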
There are also WordPress plugins that plant traps of their own: a random sequence of alphanumerics (like x470h4Dq5) tucked into each article's spot on your RSS feed. The plugin then periodically searches for that string on all the major search engines, alerting you if someone bootlegged your article. We tried that, then realized we were just playing whack-a-mole, sending threatening letters to the scrapers' ISPs every damn day to get them to stop. It consumed so much of our time and energy that we were writing much less. But man, wouldn't you get so effing pissed if someone was doing that to you and making money from it? Yes?
Okay, now imagine this, this one's the kicker: one of these blog scrapers got accepted by Google News while you got rejected, so there's your article on Google News, but on some asshole's site, not yours, with his ads. This is my single biggest grievance with Google: they don't seem to try very hard to clean this problem up, even though they are uniquely capable of attacking it. Most of the scrapers operate out of Asia or other distant jurisdictions, so it's not easy to get them to comply with your request to stop stealing your shit.
Okay, deep breath, I’ve hit the point of tl;dr.