RonAmok!

The adventures of an analog engineer and digital storyteller who studies emerging networks and their impact on the great game of business.

A few months ago, Southern California Edison installed a new smart meter on my house. Those who read this blog regularly know that I love data, so I admit that my heart raced a little when I learned that my new meter not only takes up to four readings per hour, but also makes that information accessible online.

It took a few months for the data-logging to begin, but I’m now in data-heaven. Here’s what one day of data looks like.

Figure 1: 24 readings daily

At first glance, this chart may not seem overly special. But, I contend that it represents a large step toward a very different world.

Consider the fact that before my smart meter, the power company could only measure my annual power consumption in twelve monthly chunks. The new smart meter subdivided those chunks into 8,760 hourly readings, thus increasing the resolution of my power consumption picture by a factor of 730.

With the cost of sensors, processing power, wireless communications and cloud storage plummeting, we’re on the cusp of being able to measure things with unprecedented granularity. Just as telescopes and microscopes opened access to worlds that were once invisible to the naked eye, devices like smart meters are likewise opening up new worlds. The finer we chop the data, the higher the resolution. The higher the resolution, the more visibility we have.

For example, Figure 1 illustrates that my house burns 0.45 kilowatt-hours (kWh) each hour between 12:00 and 4:00 a.m., a time when the Ploof household was sleeping. The only devices running at that time were a refrigerator, a 60-watt porch light, clock radios, a cable modem, a wireless router, and other items that sit in “sleep mode” (computers, DVR, etc.).

So, how much does it cost to run my house at its lowest electrical consumption level? At $0.17 per kWh, it costs about 7.7 cents every hour. Think that number is small? Try multiplying it by the 8,760 hours in a year. It costs me roughly $670 per year just to run the house at its lowest level, without anything else running!
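
For anyone who wants to check the arithmetic, here is a minimal sketch in Python. The 0.45 kWh baseline and the $0.17 rate come straight from the meter data and my bill; everything else is simple multiplication.

    # Baseline cost of running the house at its lowest consumption level
    baseline_kwh_per_hour = 0.45      # overnight draw from the smart-meter data
    rate_per_kwh = 0.17               # dollars per kWh
    hours_per_year = 8760

    hourly_cost = baseline_kwh_per_hour * rate_per_kwh    # ~$0.077 per hour
    annual_cost = hourly_cost * hours_per_year            # ~$670 per year
    print(f"{hourly_cost * 100:.1f} cents/hour, ${annual_cost:,.0f}/year")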

Let’s play with this number a little more. According to the US Energy Information Administration, the average California household consumes 562 kWh per month (6,744 kWh per year). Since my household consumed 8,168 kWh last year, let’s say that my usage is 21% higher than the average residence in my little town. Assuming that the average baseline power consumption of its 18,000 residents is 21% lower than mine (about 0.36 kWh per hour), at a minimum, they consume:

  • 6,399 kWh per hour
  • 153,576 kWh per day
  • 56,055,240 kWh per year

which costs the residents $9.529 million annually. Start multiplying that number by the residents of the state or the residents of the country, and we’re talking huge numbers. By increasing the resolution of our data, we get to see the big picture in a totally different way.
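
Here is that town-wide estimate as a quick sketch, under the same assumptions as above:

    # Town-wide baseline consumption, using the assumptions above
    my_baseline = 0.45                        # kWh per hour
    town_baseline = my_baseline * (1 - 0.21)  # 21% lower than mine, ~0.36 kWh/hour
    count = 18_000                            # residents, per the estimate above
    rate_per_kwh = 0.17

    per_hour = town_baseline * count          # ~6,399 kWh per hour
    per_day = per_hour * 24                   # ~153,576 kWh per day
    per_year = per_day * 365                  # ~56,055,240 kWh per year
    annual_cost = per_year * rate_per_kwh     # ~$9.53 million per year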

And we still haven’t come close to the highest resolutions: power readings can be subdivided even further to show how much power individual appliances or fixtures consume. Such granularity would let us actually measure phantom load, the amount of electrical energy an appliance consumes while plugged in yet switched off (today we can only estimate it). In other words, most phantom load is simply wasted energy.

According to Wikipedia, phantom load is estimated to be 10% of our power consumption. Using the baseline electrical consumption calculated earlier, phantom load (with everything off) for my little town comes to an estimated 5.6 million kWh per year, at an annual cost of $950,000.
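
Continuing the arithmetic, the phantom-load estimate is simply ten percent of the town’s annual baseline:

    # Phantom load: estimated at 10% of the town's annual baseline consumption
    town_baseline_kwh_per_year = 56_055_240           # from the sketch above
    phantom_kwh = town_baseline_kwh_per_year * 0.10   # ~5.6 million kWh per year
    phantom_cost = phantom_kwh * 0.17                 # ~$950,000 per year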

$950,000 paid for wasted energy. Think about what the city could do with this amount of money.

Today’s technology allows us to cost-effectively measure the total power consumption of our homes on an hourly basis. It’s only a matter of time before new technologies enable us to cost-effectively do the same thing for everything that we plug in, increasing the granularity enough to actually measure phantom load. And once we can measure it, we’ll be able to do something about it, like automatically disconnecting appliances from the grid when they are not needed.

Throughout history, resolution-enhancing devices like telescopes, microscopes and now smart meters have led to new discoveries that shake our fundamental assumptions and ultimately lead to innovation. They allow us to measure things as opposed to estimating them. Since most businesses are built on measurements, this increased resolution offers fertile ground for the development of new products, services and, ultimately, new businesses.

Are you ready for this new granular age?

Mar 19, 2012

Successful, million-dollar Kickstarter projects such as Double Fine Adventure and the Elevation Dock have proven that there is a demand for crowdfunding. Small businesses are using intermediaries like Kickstarter to help them pre-sell yet-to-be-developed products, and thousands of “backers” are lining up to accept their offers.

But what if one of these Kickstarter backers wants more than just a game, a tee-shirt, or an Elevation Dock? What if they want a piece of the company or a piece of the project? Wouldn’t it be possible for someone like Tim Schafer to sell stock in his company through a service like Kickstarter?

Not in the United States.

Although crowdfunding for equity is allowed in some European countries, the investor-protection laws that followed the Crash of ‘29 preclude the practice here. But change is in the air. Legislation is working its way through Congress to allow small businesses to sell securities under lighter SEC restrictions.

Congressman Patrick McHenry (R) of North Carolina introduced H.R. 2930, the Entrepreneur Access to Capital Act. After some debate, H.R. 2930 passed the House by a vote of 407 to 17, and it now sits in the Senate’s Committee on Banking, Housing, and Urban Affairs.

The bill seeks exemptions from the Securities Act of 1933 and the Securities Exchange Act of 1934 to allow companies to raise funds by selling securities under certain restrictions:

  • A company can raise up to $2 million per year if it provides investors with audited financial statements, or up to $1 million per year without those documents.
  • Risk to investors is limited by capping each investor’s participation at the lesser of $10,000 or 10% of the investor’s annual income (see the sketch after this list).
  • Those who sell securities (whether the company itself or an intermediary) will still be subject to certain Securities and Exchange Commission filing requirements, but neither will need to be registered “…as a broker under section 15(a)(1) of the Securities and Exchange Act of 1934 solely by reason of participation in such transaction.”
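
As a rough sketch of how that per-investor cap would work (the thresholds are those proposed in H.R. 2930 and could change before anything becomes law):

    # Per-investor cap proposed in H.R. 2930: the lesser of $10,000
    # or 10% of the investor's annual income.
    def max_crowdfund_investment(annual_income):
        return min(10_000, 0.10 * annual_income)

    # Example: someone earning $60,000 a year could invest at most $6,000.
    print(max_crowdfund_investment(60_000))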

While H.R. 2930 waits in the Senate, it has also been packaged with four other bills to form H.R. 3606, a.k.a. the Jumpstart Our Business Startups (JOBS) Act (don’t you just love these names?), which the House approved by a 390 to 23 vote. Those four other bills include:

  • H.R. 2940, the Access to Capital for Job Creators Act
  • H.R. 1070, the Small Company Capital Formation Act
  • H.R. 2167, the Private Company Flexibility and Growth Act (still in the House)
  • H.R. 4088, the Capital Expansion Act (the House version of S. 1941)

And while all of the bills mentioned so far have “H.R.” in front of them, the Senate has its own version too. Senator Scott Brown (R) of Massachusetts introduced S. 1791, the Democratizing Access to Capital Act.

The largest hurdle these bills must overcome is convincing lawmakers that investors are protected. With images of Enron and Bernie Madoff fresh in their minds, legislators will be taking a close look at the exemptions from stringent reporting requirements, such as Sarbanes-Oxley, that these bills offer small businesses.

The big questions will come down to risk. How much risk will investors be subject to? On Wednesday, March 21, 2012 the Senate’s Committee on Banking, Housing, and Urban Affairs will conduct an open hearing entitled “Examining Investor Risks in Crowdfunding” to discuss this very topic.

One of the questions that I had after reading these bills was: “And then what?” For example, H.R. 2930 restricts the selling of crowdfunded securities “…during the 1-year period beginning on the date of purchase, unless such securities are sold to–1) the issuer of such securities; or 2) an accredited investor.”

It makes for an interesting future. Let’s assume that a Crowdfunding Law is passed in 2012 and that some crowdfunded securities are sold to investors. At some time in 2013, will these securities be available to sell on the open market? If so, who will handle those sales? Will there be a new marketplace to buy such securities, like a Crowdfunding Stock Exchange?

Networked technologies have given us new ways to work together. Services like Kickstarter have given video game developers new options beyond finicky publishers. Equity-crowdfunding laws will give small businesses capital options beyond neighborhood bankers and traditional investors. If Congress can balance small businesses’ need for capital with an acceptable level of risk for investors, we may be entering a new era of entrepreneurialism.

What do you think?

Photo Credit: Library of Congress

Mar 6, 2012

When I was a young engineer in the mid-1980s, I was fascinated with the concept of neural networks, computing machines that attempt to solve problems the way people do. You see, the human brain differs from the machine that we’ve come to know as the “computer.” Computers are built around a very powerful processor that can execute millions of instructions per second. The human brain, on the other hand, consists of billions of simple processors (neurons) that are highly interconnected with one another. Computers process things serially (one after another), while humans compute things in parallel (all at once). Each method has its respective strengths and weaknesses.

Computers are better than humans at repetitive tasks, which makes them the right choice when problems need to be brute-forced through quickly. And although people can perform repetitive tasks, their forte is assessing complex situations. The differences in each process are profound. For example, if we asked a computer and a person to calculate pi to the one-millionth digit, a computer would do so in the blink of an eye, while a person would require a month of Sundays. Yet, if we asked that same computer to sort zoo photographs by species, a child would finish the task before the computer could determine the difference between an animal and a zookeeper. Computers may be able to calculate the live path of an asteroid traveling through a gravitational field with ease, but when it comes to discerning a monkey from a mongoose, they are not smarter than a 5th grader.

For some reason, billions of simple, highly-interconnected neurons are much better at pattern recognition than powerful individual microprocessors.

For many years, scientists have been trying to emulate the brain’s method for solving problems through the creation of Artificial Neural Networks (ANNs). The problem with neural networks is that they need to be “taught” through a trial-and-error process, much like humans are taught a new language, mathematics, or music. And although we’ve found some limited applications for ANNs, we still haven’t found cost-effective ways to apply them to many real-world problems.
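
To make “teaching by trial and error” concrete, here is a toy sketch of the simplest kind of artificial neuron, a single perceptron, learning the logical AND pattern by guessing, checking its error, and nudging its weights. Real ANNs chain thousands of these units together, but the learning principle is the same.

    # A single artificial "neuron" learns the AND pattern by trial and error.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias, rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):                       # repeated passes are the "teaching"
        for (x1, x2), target in examples:
            guess = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - guess            # wrong guess -> adjust and try again
            weights[0] += rate * error * x1
            weights[1] += rate * error * x2
            bias += rate * error

    print(weights, bias)                      # weights that correctly separate AND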

But, perhaps we’ve been looking at the problem the wrong way. What if, instead of building application-specific computers based on the type of problem to solve, we created a hybrid system that used the best attributes of human and machine computation?

Hybrid Human-Machine Computing

Professor Luis von Ahn’s mission is to “…build systems that combine humans and computers to solve large-scale problems that neither can solve alone.” If you don’t recognize von Ahn’s name, you probably recognize his work: he helped develop CAPTCHA, the web-based challenge system that allows websites to tell machines and humans apart. CAPTCHA, which stands for Completely Automated Public Turing test to tell Computers and Humans Apart, is built on the fundamental principle that human brains are better at pattern recognition than computers. Since people are far better than computers at identifying highly distorted letters, CAPTCHA provides website owners and their customers a level of security with respect to web-based transactions. According to von Ahn, the CAPTCHA verification process happens about 200 million times per day [1], a number that got him thinking differently about another problem he was working on: the accurate digitization of books.

Companies such as Google and Amazon have large-scale projects to digitize books. The process involves scanning physical books and converting the captured images to text through optical character recognition (OCR) software. And while OCR technology is effective at translating clear and perfectly aligned images, it makes many mistakes when translating less-than-perfect ones.

But that’s when Professor von Ahn got an idea. If he could find a way to tap into the collective intelligence manifested within 200 million transactions per day, would it be possible to extract additional value from each transaction, such as helping computers with difficult OCR problems? He did so by retooling CAPTCHA to present two words instead of one: one word passes the security test, while the other helps a stumped computer.
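
In code, the core idea looks something like the sketch below. This is only an illustration of the mechanism as described above; the real reCAPTCHA service is far more sophisticated about distortion, scoring, and agreement between multiple users.

    # Simplified sketch of the two-word reCAPTCHA idea (illustrative only)
    votes = {}   # unreadable word image -> list of human transcriptions

    def check_response(known_answer, typed_known, unknown_word_id, typed_unknown):
        """The first word has a known answer (the security test); the second is
        a word OCR could not read, so we collect human guesses for it."""
        if typed_known.strip().lower() != known_answer.lower():
            return False                      # failed the human test
        # Passed the test, so record this human's reading of the hard word.
        votes.setdefault(unknown_word_id, []).append(typed_unknown.strip())
        return True

    # Once enough humans agree on a transcription, it becomes the digitized text.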

“Whenever you type the distorted characters,” Professor von Ahn says, explaining reCAPTCHA, “not only are you authenticating yourself as a human, but in addition, you’re helping us to digitize books…With this method, we are digitizing approximately 100 million words a day, which is the equivalent of two million books per year.”[2]

Let’s put 100 million words per day into a business perspective:

  • 100 million transactions at 10 seconds each represent about 277,778 person-hours per day.
  • 277,778 person-hours at 8 hours per workday form an equivalent “project team” of about 34,722 people, which is slightly larger than Google’s workforce.
  • If project team members were paid the federal minimum wage of $7.25 per hour, the payroll would run $251,736 for every hour worked, or roughly $2 million per day (see the sketch after this list).
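
Here is the back-of-the-envelope arithmetic in code form, using the same assumptions (10 seconds per transaction, 8-hour workdays, $7.25 per hour):

    # Back-of-the-envelope value of 100 million reCAPTCHA solves per day
    transactions_per_day = 100_000_000
    seconds_each = 10
    minimum_wage = 7.25                                          # dollars per hour

    person_hours = transactions_per_day * seconds_each / 3600    # ~277,778 per day
    team_size = person_hours / 8                                 # ~34,722 people
    hourly_payroll = team_size * minimum_wage                    # ~$251,736 per hour worked
    daily_payroll = person_hours * minimum_wage                  # ~$2.0 million per day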

But there’s more to these numbers. Even if it were economically feasible to hire 34,000 full-time employees who did nothing but translate obscure images into text, that team could never approach the efficiency of 100 million people performing the task once per day. Normal human limitations, such as fatigue or boredom, would surely slow them down.

Companies Must Think Differently

Not too long ago, the ability to access 10 seconds of 100 million people’s time to perform a menial task would have been cost-prohibitive. Today, it’s free. CAPTCHA and reCAPTCHA are examples of how hybrid networks drive down transaction costs associated with distributed labor, and companies must think about how these technologies will affect their businesses.

Many companies talk about the power of networked computers. They use fancy terms such as cloud computing, the Internet of Things, and machine-to-machine (M2M) communications. But few of them are thinking past the obvious uses, and only a handful are pushing the boundaries of possibility the way Professor von Ahn is; he is now wondering how to “…get 100 million people to help us translate the whole web into every major language, for free?”[4]

Professor von Ahn’s newest project, Duolingo, offers the following value proposition to the 1.2 billion people who want to learn another language:

If you’ll help us translate web pages, we’ll help you learn a new language for free.

Duolingo is presently in beta testing, but is already showing positive results. “The translations that we get from people are as accurate as those from professional language translators,”[5] von Ahn said, offering good news for society, yet not-so-good news for professional translators.

Hybrid human-computer networks offer lessons for any CEO. They may reveal untapped resources that hold the keys to unlocking the same problems that a company has been stuck on for years.

It’s time to think past the obvious. It’s time to think hybrid.