Mar 6, 2012

When I was a young engineer in the mid-1980s, I was fascinated with the concept of neural networks–computing machines that attempted to solve problems similarly to the way that people solved them. You see, the human brain differs from the machine that we’ve come to know as the “computer.” Computers are built upon a very powerful processor that can execute millions of instructions per second. The human brain, on the other hand, consists of billions of simple processors (neurons) that are highly interconnected with one another. Computers process things serially (one after another), while humans compute things in parallel (all at once). Each method has its respective strengths and weaknesses.

Computers are better than humans at repetitive tasks, which makes them the right choice when problems need to be brute-forced quickly. And although people can perform repetitive tasks, their forte is assessing complex situations. The differences between the two are profound. For example, if we asked a computer and a person to calculate pi to the one-millionth digit, the computer would do so in the blink of an eye, while the person would require a month of Sundays. Yet, if we asked that same computer to sort zoo photographs by species, a child would finish the task before the computer could determine the difference between an animal and a zookeeper. Computers may be able to calculate the flight path of an asteroid traveling through a gravitational field with ease, but when it comes to discerning a monkey from a mongoose, they are not smarter than a 5th grader.

For some reason, billions of simple, highly interconnected neurons are much better at pattern recognition than powerful individual microprocessors.

For many years, scientists have been trying to emulate the brain’s method of solving problems through the creation of Artificial Neural Networks (ANNs). The catch is that neural networks need to be “taught” through a trial-and-error process, much like humans are taught a new language, mathematics, or music. And although we’ve found some limited applications for ANNs, we still haven’t found cost-effective ways to apply them to many real-world problems.
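
To make the “taught through trial and error” idea concrete, here is a minimal sketch in Python, far simpler than any real ANN, of a single artificial neuron learning the logical AND function: it guesses, compares its guess against the known answer, and nudges its weights after every mistake.

```python
# A single artificial "neuron": weighted inputs plus a bias, squashed to 0 or 1.
def predict(weights, bias, inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# Trial and error: guess, compare against the known answer, nudge the weights.
def train(examples, epochs=20, rate=0.1):
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# "Teach" the neuron the logical AND function from labeled examples.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(examples)
print([predict(weights, bias, x) for x, _ in examples])  # expect [0, 0, 0, 1]
```

Real neural networks chain thousands or millions of these simple units together, but the learning loop (guess, check, adjust) is the same trial-and-error process.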

But, perhaps we’ve been looking at the problem the wrong way. What if, instead of building application-specific computers based on the type of problem to solve, we created a hybrid system that used the best attributes of human and machine computation?

Hybrid Human-Machine Computing

Professor Luis von Ahn’s mission is to “…build systems that combine humans and computers to solve large-scale problems that neither can solve alone.” If you don’t recognize von Ahn’s name, you probably recognize his work–he helped develop CAPTCHA, the web-based challenge system that allows websites to tell machines and humans apart. CAPTCHA, which stands for Completely Automated Public Turing test to tell Computers and Humans Apart, is built on the fundamental principle that human brains are better at pattern recognition than computers. Because people are far better than computers at reading highly distorted letters, CAPTCHA gives website owners and their customers a measure of security for web-based transactions. According to von Ahn, the CAPTCHA verification process happens about 200 million times per day [1]–a number that got him thinking differently about another problem he was working on–the accurate digitization of books.
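
As an illustration only (not any production CAPTCHA system, and the function names are my own), the core transaction can be sketched in a few lines: the server generates a word whose answer it knows, renders it in a way that defeats OCR, and accepts the visitor only if the typed response matches.

```python
import random
import string

def render_challenge(answer):
    # Stand-in for the hard part: a real CAPTCHA warps the text into an image
    # that people can read but OCR software cannot.
    return f"<distorted image of '{answer}'>"

def new_challenge(length=6):
    answer = "".join(random.choice(string.ascii_lowercase) for _ in range(length))
    return render_challenge(answer), answer

def looks_human(typed, answer):
    # The server already knows the answer, so a correct response is evidence
    # that a human did the reading.
    return typed.strip().lower() == answer.lower()

challenge, answer = new_challenge()
print(challenge)
print(looks_human(answer, answer))  # a correct reading passes the test
```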

Companies such as Google and Amazon have large scale projects to digitize books. The process involves scanning physical books and converting the captured images to text through optical character recognition (OCR) software. And while OCR technology is effective at translating clear and perfectly aligned images, it makes many mistakes while translating less-than-perfect ones.

But that’s when Professor von Ahn got an idea: if he could tap into the collective intelligence expressed in those 200 million transactions per day, could he extract additional value from each one, such as helping computers with difficult OCR problems? He did so by retooling CAPTCHA to present two words instead of one. One word is used to pass the security test, while the other helps a stumped computer.
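
Here is a hypothetical sketch of that two-word scheme (my own illustration of the idea, not von Ahn’s implementation; the word identifiers and vote threshold are invented). The control word handles security exactly as before, the reading of the unknown word is stored as a vote, and once enough independent users agree, the word is considered digitized.

```python
from collections import Counter, defaultdict

votes = defaultdict(Counter)  # unknown word id -> Counter of human transcriptions

def verify_and_harvest(typed_control, typed_unknown, control_answer, unknown_id):
    # Security first: the user must read the known word correctly to pass.
    if typed_control.strip().lower() != control_answer.lower():
        return False
    # Digitization second: record what the human read for the word OCR missed.
    votes[unknown_id][typed_unknown.strip().lower()] += 1
    return True

def accepted_transcription(unknown_id, min_votes=3):
    # Once enough independent users agree, treat their answer as the true text.
    if votes[unknown_id]:
        text, count = votes[unknown_id].most_common(1)[0]
        if count >= min_votes:
            return text
    return None

# Three visitors pass the security word and agree on the OCR-stumping word.
for _ in range(3):
    verify_and_harvest("overlooks", "upon",
                       control_answer="overlooks", unknown_id="book12/p4/w7")
print(accepted_transcription("book12/p4/w7"))  # -> "upon"
```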

“Whenever you type the distorted characters,” Professor von Ahn says of reCAPTCHA, “not only are you authenticating yourself as a human, but in addition, you’re helping us to digitize books…With this method, we are digitizing approximately 100 million words a day, which is the equivalent of two-million books per year.”[2]

Let’s put 100 million words per day into a business perspective:

  • 100 million transactions at 10 seconds apiece work out to about 277,778 person-hours per day.
  • 277,778 person-hours at 8 hours per workday forms an equivalent “project team” of about 34,722 people, which is slightly larger than Google’s workforce.
  • If project team members were paid the federal minimum wage of $7.25 per hour, the team’s daily payroll would run roughly $2 million (277,778 person-hours × $7.25 ≈ $2,013,889), as the quick calculation below shows.
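
For anyone who wants to check the math, the same back-of-the-envelope arithmetic in a few lines of Python:

```python
transactions_per_day = 100_000_000   # CAPTCHA solves harvested per day
seconds_per_transaction = 10
minimum_wage = 7.25                  # USD per hour (2012 federal minimum)
workday_hours = 8

person_hours = transactions_per_day * seconds_per_transaction / 3600
team_size = person_hours / workday_hours
daily_payroll = person_hours * minimum_wage

print(f"{person_hours:,.0f} person-hours per day")      # ~277,778
print(f"{team_size:,.0f} full-time equivalents")        # ~34,722
print(f"${daily_payroll:,.0f} in wages per day")        # ~$2,013,889
```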

But there’s more to these numbers. Even if it were economically feasible to hire 34,000 full-time employees who did nothing but translate obscure images into text, that team could never approach the efficiency of 100 million people performing the task once per day. Normal human limitations, such as fatigue or boredom, would surely slow them down.

Companies Must Think Differently

Not too long ago, the ability to access 10 seconds of 100 million people’s time to perform a menial task would have been cost-prohibitive. Today, it’s free. CAPTCHA and reCAPTCHA are examples of how hybrid networks drive down transaction costs associated with distributed labor, and companies must think about how these technologies will affect their businesses.

Many companies talk about the power of networked computers. They use fancy terms such as cloud computing, the Internet of Things, and machine-to-machine (M2M) communications. But while many of them are exploring the obvious uses, only a handful are pushing the boundaries of possibility, such as Professor von Ahn, who is now wondering how to “…get 100 million people to help us translate the whole web into every major language, for free?”[4]

Professor von Ahn’s newest project, Duolingo, offers the following value proposition to the 1.2 billion people who want to learn another language:

If you’ll help us translate web pages, we’ll help you learn a new language for free.

Duolingo is presently in beta testing, but is already showing positive results. “The translations that we get from people are as accurate as those from professional language translators,”[5] von Ahn said, offering good news for society, yet not-so-good news for professional translators.

Hybrid human-computer networks offer lessons for any CEO. They may reveal untapped resources that hold the key to problems the company has been stuck on for years.

It’s time to think past the obvious. It’s time to think hybrid.
