MIT $100K entrepreneurship contest
Posted May 19, 2006
Yesterday was the final award ceremony for the MIT $100K Entrepreneurship Competition, wherein teams of students compete based on the viability of their business plans for a new start-up company. The classic approach takes an invention developed at an MIT lab and runs with it, but there are other choices too, especially since this year's contest was expanded to include ideas for non-profit (but sustainable) businesses to help the developing world. It's a great opportunity for all concerned, since even beyond the prize money, there are a lot of big players in the Boston venture-capital community who pay close attention to the teams.
An early favorite of mine, Avanti, ended up as a runner-up, getting edged out by a very strong-looking biotech company. They were a favorite based on two factors: a clear technological advantage (the ability to smelt titanium using a cheaper, easier, less-polluting process), and a kick-ass spokesman (really, he gave a very memorable pitch at the semifinals). The latter shouldn't really have been a factor, per se, but beyond the strength of the business plan and the technology, the strength of the team members is also being judged, and the ability to communicate your vision clearly and memorably is a skill worth some major points, both in the contest and in the business world.
I didn't realize until the end that the smelting process had originally been developed for another purpose: extracting oxygen from lunar soil. It just so happened to produce titanium as a by-product, and the Avanti team decided to pursue that more mundane application here on the ground. Yet another example of "one person's noise is another person's data", I guess. In any case, I wish them luck with their new business, and hope they can gather as much capital as they need to get up and running.
Field-programmable object array
Posted May 10, 2006
This week there were a couple of interesting conferences in Boston, both of which provided the usual free expo passes, yay. Unfortunately, I didn't find much of interest to me at Nanotech 2006, since I guess I'm still mostly in the "micro" world of MEMS and electronics rather than the true "nano" world of materials. The second conference, The Vision Show East, was much more interesting, if only for the profusion of companies running demos of their high-speed cameras grabbing frames of some spinning gizmo on their tabletop.
One booth caught my eye, advertising an FPOA chip. What, might you ask, is an FPOA? I asked the same thing, and they kindly explained their vision of a processor intermediate in configurability between an FPGA, with almost gate-level flexibility, and a DSP chip, with none beyond its instruction set. Instead, they provide a large number of functional object blocks (like a multiply-and-accumulate unit or an ALU) which you can arrange to best match your algorithm. You gain some impressive performance this way over an FPGA (they clock the chip at 1 GHz, and on every cycle you can move data between many different blocks simultaneously), and some flexibility over a DSP.
The catch is that designing for it is a bit of a chore. While you have the ability to tweak things very carefully, there's currently no way to, say, just write an algorithm in C and have an optimal configuration of processing units generated from it automatically. For the type of high-end data-crunching they have in mind, that's not too big an issue, since you would want to hand-optimize your MPEG-4 encoder as much as you possibly could before shipping a million units of it anyway.
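To make that design style concrete, here's a toy sketch in Python (entirely my own cartoon, not the vendor's actual tool flow): you instantiate a few functional objects and wire them up to match your algorithm, in this case a little FIR filter built from a chain of MAC blocks.

```python
# Toy model of arranging functional objects (MACs) to match an algorithm.
# Purely illustrative; the real FPOA is configured with vendor tools, not
# Python, and the names here are invented for the sketch.

class MAC:
    """Multiply-and-accumulate block with a fixed coefficient."""
    def __init__(self, coeff):
        self.coeff = coeff

    def step(self, sample, accum_in):
        return accum_in + self.coeff * sample

def fir_filter(samples, coeffs):
    """A 4-tap FIR filter expressed as a chain of MAC objects."""
    macs = [MAC(c) for c in coeffs]
    history = [0.0] * len(coeffs)
    out = []
    for x in samples:
        history = [x] + history[:-1]          # shift register of past samples
        acc = 0.0
        for mac, past in zip(macs, history):  # each MAC handles one tap
            acc = mac.step(past, acc)
        out.append(acc)
    return out

print(fir_filter([1, 0, 0, 0, 1, 2], [0.5, 0.25, 0.125, 0.125]))
```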
Strange theory of quantum gravity
Posted April 27, 2006
Yesterday I trekked up to Harvard to, ironically, attend a talk by an MIT professor, Seth Lloyd. The topic was his theory of quantum gravity, which I found pretty novel. I've no idea whether it's true, but it's one of those theories that is worth thinking hard about even if it turns out to be false. Plus, it may well be right.
The problem is how to merge quantum theory with general relativity. These two theories are highly successful in their own domains, but unfortunately they picture the world in (what seem to be) fundamentally incompatible ways. Ideally, a fully merged theory would allow us to speak of spacetime itself, the fabric of general relativity, in the language of quantum mechanics, with probabilities of having one configuration or another. The trouble is that quantum mechanics treats spacetime as a fixed background within which all of the different particles and fields do their probabilistic dances. And GR, on the flip side, treats the particles and fields classically, without the full quantum treatment.
Lloyd's theory is interesting in that he takes a new approach, not even treating space and time in the normal "coordinate" fashion, as a backdrop against which other events happen. Instead, distances in space and time are derived quantities, constructed (if we so choose) after the fact from the known interactions of particles. That is, "if this photon hit this electron here, we know they were in the same place at the same time, and if it hits another electron over there, we know that it had to travel so far for so long at the speed of light to get there, so the electron must have been..." and so on. Do this for lots and lots of interactions, and you build up a self-consistent picture of where and when things had to have been, and how spacetime must have been curved to allow it.
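As a cartoon of what "distance as a derived quantity" means (this is purely my own toy, nothing like Lloyd's actual construction), imagine a few particles on a line exchanging photons: each exchange fixes a light-travel time, and a least-squares fit gives the most self-consistent set of positions.

```python
# Toy illustration: reconstruct positions from photon light-travel times,
# rather than assuming a coordinate backdrop.  All numbers are invented.
import numpy as np

c = 3.0e8  # m/s

# One-way photon travel times between particles (seconds), assuming particle 0
# sits at the origin and the others lie to its right.
travel_times = {(0, 1): 1.0e-9, (1, 2): 2.0e-9, (0, 2): 3.1e-9}

# Each exchange (i, j) gives a linear constraint x_j - x_i = c * t_ij.
pairs = list(travel_times)
A = np.zeros((len(pairs), 3))
b = np.zeros(len(pairs))
for row, (i, j) in enumerate(pairs):
    A[row, i], A[row, j] = -1.0, 1.0
    b[row] = c * travel_times[(i, j)]
A = A[:, 1:]                      # gauge choice: pin x_0 = 0

# The constraints are slightly inconsistent (as real measurements would be);
# least squares finds the most self-consistent picture.
positions, *_ = np.linalg.lstsq(A, b, rcond=None)
print("reconstructed positions (m):", 0.0, *positions)
```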
Why do this? Well, because we don't know precisely what events took place among all the particles, thanks to quantum uncertainties, our picture of spacetime is also uncertain. We're looking at the fabric of spacetime in a quantum, probabilistic sense, which is precisely what we wanted from a decent theory of quantum gravity!
At this point, my hand-waving understanding of his results starts to diverge from their actual contents, so I'll leave you to investigate the paper linked above for more information. Oh, but one last teaser: you can argue that spacetime naturally should be four-dimensional (3D plus time) from the fact that interactions in physics are between two particles... two incoming and two outgoing paths translate to four fundamental directions. If this is true, I can only say, "Whoa," in my best Keanu impression.
Energy-concentrating phenomena
Posted April 19, 2006
Ordinarily I don't attend too many physics talks off-campus, despite a rich constellation of them in the vicinity of Cambridge. Every once in a while I run across one that is just too good to miss despite a long walk, though, and yesterday was such an occasion: Seth Putterman from UCLA came to Boston University to give a lecture on energy-concentrating phenomena, such as his work on sonoluminescence and pyrofusion.
I'd been peripherally aware of these developments for a while, but I just didn't have a good appreciation for how weirdly fascinating they actually are. The general theme was how various low-energy, slow processes in fluids and solids sometimes result in high-energy, fast phenomena. For a physicist like me, this immediately piques my interest, because you usually don't expect natural processes to "focus" like that: it's the sort of thing technology does, whereas nature generally prefers to smooth things out and slow outliers down.
Examples. The sonoluminescence process takes room-temperature water subjected to normal (albeit very loud) sound waves and converts that acoustic energy to picosecond-long flashes of light from bubbles excited to thousands of degrees, if not much hotter. Stranger still, the jitter from one flash to the next cycle is also on the scale of tens of picoseconds, even though the only thing providing the timing is a kilohertz-scale continuous sound wave. Very odd.
Even odder is the example of pyroelectricity. You take a normal ferroelectric crystal, and heat it up and cool it down over time scales of minutes... and it spits out X-rays and relativistic electrons! This is deeply weird. It's like you're crumpling and uncrumpling a piece of paper and every once in a while it emits a thunderclap: the energy scale is just completely wrong for what goes in versus what comes out.
Putterman and others have investigated these phenomena from the point of view of trying to harness them for sparking off tabletop fusion, with some success. Any time you have a normal continuous-media process providing you with easy access to high energies, you want to see how you can exploit that to leapfrog yourself to other high-energy processes such as nuclear reactions. But for me, I was happy to just bask in the weirdness of physics for a little while, and revel in the fact that nature can still surprise me in wonderful ways. There are some deep questions waiting to be answered here...
Massive parallelism in a single CPU
Posted April 12, 2006
I've always loved multi-processor systems, and most of my desktop machines have had dual CPUs for a while now. In the immortal words of Be's founder, "One processor per person is not enough." The trend today is to extend that down to the individual CPU too: one processor core per chip is not enough. Still, while there's a big difference between one CPU and "more than one", why should you stop at just two?
The guys at Rapport are thinking the same way, and are designing chips with hundreds or thousands of processor cores on the same piece of silicon, along with extra logic to reconfigure how they work. Granted, the individual cores aren't nearly as complicated as a Pentium or PowerPC processor, but for many applications that doesn't matter a whole lot.
Why would you want to do this? Power consumption is one big consideration: the faster you try to shove electrons around, the more juice it takes, and since higher clock rates generally demand higher supply voltages, power grows faster than linearly with frequency. So if you can do the same work with ten cores running at 1 MHz instead of one at 10 MHz, you can come out ahead. The other consideration is ease of design: doubling the clock rate of a complicated processor is a massive undertaking, but if all you have to do is slap on another batch of cores you've already designed...
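Here's the back-of-envelope version of that argument (illustrative numbers of my own, not Rapport's): dynamic CMOS power goes roughly as C·V²·f, and slower clocks allow lower supply voltages.

```python
# Back-of-envelope dynamic-power comparison with made-up numbers:
# CMOS switching power scales roughly as C * V^2 * f, and lowering the clock
# usually lets you lower the supply voltage too.

def dynamic_power(cap_farads, volts, freq_hz):
    return cap_farads * volts**2 * freq_hz

C = 1e-9  # effective switched capacitance per core (illustrative)

# One core at 10 MHz needs, say, 1.2 V; ten cores at 1 MHz might run at 0.8 V.
one_fast = dynamic_power(C, 1.2, 10e6)
ten_slow = 10 * dynamic_power(C, 0.8, 1e6)

print(f"1 core  @ 10 MHz: {one_fast*1e3:.1f} mW")
print(f"10 cores @ 1 MHz: {ten_slow*1e3:.1f} mW")
```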
In any case, I like the idea, and I like the sort of programming challenges that writing efficient code for such a chip would provoke. People have had to think about these issues a lot in the context of supercomputing, so it's about time we got a Connection Machine on our desks.
Electrochemical actuators
Posted April 3, 2006
Today's lesson is that "one person's noise is another person's data", in the words of astrophysicists. Or more relevantly here: one person's annoying side effect is another person's brilliant solution.
Dr. Yet-Ming Chiang from the materials science department gave a nice introductory talk today about his work with battery technology, in particular his development of a useful lithium iron phosphate high-power cell currently being commercialized by A123. Yay, MIT spin-offs!
But the more interesting part of the talk was about his new, mostly still unpublished work on morphing materials. Basically, he noticed that lithium batteries flex significantly during charging cycles, which is highly annoying but reasonable when you consider that a significant fraction of the atoms in them move between the anode and cathode during use: the materials have to stretch to accommodate the extra ions.
The new discovery, however, is that this flexing is strong: you're spending a decent amount of energy to stuff each extra ion into the material when you charge a battery, and that translates into a lot of force the material can exert. We're talking tons per square inch here, and strains of 10% or so... this is suddenly very interesting as an actuator, since it completely blows away piezoelectrics.
Granted, you lose in actuation speed, since it's a diffusive process and doesn't want to happen much faster than you can charge a battery: minutes to hours. But if your intended application is, say, a gradual reconfiguration of an airfoil or a boat hull to adapt to current conditions and demands, this is perfectly reasonable... you want to exert load-bearing forces over macroscopic distances, and you can put up with slower speed.
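Just to see why those numbers are exciting, here's a rough work-density estimate (my own arithmetic with round numbers, not figures from the talk):

```python
# Rough numbers: "tons per square inch" of stress at ~10% strain.
# All values are illustrative estimates, not data from the talk.

PSI_PER_TON_PER_SQ_IN = 2000.0          # short ton
PA_PER_PSI = 6894.76

stress_pa = 1 * PSI_PER_TON_PER_SQ_IN * PA_PER_PSI   # ~14 MPa
strain = 0.10

work_density = stress_pa * strain       # J/m^3, assuming full stress over the full stroke
print(f"stress       ~ {stress_pa/1e6:.1f} MPa")
print(f"work density ~ {work_density/1e6:.2f} MJ/m^3")

# A piezo stack at a comparable blocking stress only manages ~0.1% strain,
# so its work density comes out roughly a hundred times smaller.
```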
Very nice, both for the immediate technology, and for the general lesson that you can change a nuisance side-effect into a patentable invention with just a shift in perspective.
Game addiction for computation
Posted March 23, 2006
A few days ago I attended a talk by Luis von Ahn on the subject of "human computation". He's the inventor of CAPTCHAs, those "type in this distorted word" tests that attempt to differentiate between humans and automated software, and he's been doing some thinking on another side of that problem. Namely, given that we know that humans are capable of doing tasks that computers can't, and moreover are prone to waste a lot of time on frivolous stuff*, how can we put them to work in a useful way without abusing them or simply paying them?
His solution is to make games that perform useful tasks as a side effect of the gameplay. People will "work" for free because the games are fun and even addictive, and the net result is happy people and a big pile of worthwhile un-automatable data crunching. For instance, there's The ESP Game... the goal of the game is to guess what your (unknown, randomly assigned) partner is thinking, and the only leverage you have is a random image that is shown to both players. Naturally, the best strategy is to type words associated with the image, and moreover words that people would naturally think of when they see it. The game collects all these guesses until the two players agree on something, and then assigns reward points.
Guess what, that image has just been tagged in the database: a tedious task that computers can't presently do but which would be really handy to have done on the cheap (think Google image search). Moreover, the quality of the tags is pretty high, since you know that at least two people agreed on them, and you can run the same image through the game as many times as you want to double-check: the labor is essentially free.
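The core mechanic is simple enough to sketch in a few lines of Python; this is my own toy version, not von Ahn's implementation, but it shows how agreement between two players turns into a tag:

```python
# Minimal sketch of the ESP Game's agreement mechanic (my own toy version).

def play_round(image_id, player_a_guesses, player_b_guesses, taboo=()):
    """Return the first label both players agree on, ignoring taboo words."""
    seen_a = set()
    for a, b in zip(player_a_guesses, player_b_guesses):
        seen_a.add(a)
        if b in seen_a and b not in taboo:
            return b            # agreement: this becomes a tag for image_id
        # (a real implementation also checks a against b's earlier guesses)
    return None

tags = {}
label = play_round("img_0042",
                   ["dog", "puppy", "grass", "ball"],
                   ["cute", "ball", "dog", "park"])
if label:
    tags.setdefault("img_0042", []).append(label)
print(tags)   # {'img_0042': ['dog']}
```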
This is a clever idea, but it worries me, because humans have poor defenses against addictive behaviors, even ones they stumble on by themselves. Once you start adding in situations that are purposefully designed to be addictive (rumor has it that Blizzard hired psychologists to optimize the reward schedule for World of Warcraft to make it as addictive as possible), will we end up in a situation where computer games need to be regulated the way gambling presently is? You won't lose your shirt, but you might lose a lot of your time...
* A sobering statistic included in the talk: in 2003, it is estimated that 9 billion man-hours were spent playing Microsoft Solitaire. That's the entire productive lifetimes of 100,000 people, and for comparison the Panama Canal was built with about 0.2% of that time.
How to leak a secret
Posted March 15, 2006
Cryptography is a wonderful subject to dabble in, because it relies both on mathematics and cloak-and-dagger cleverness. You not only have to be smart enough to make the mathematics work, but you have to be smart enough to ensure that your mathematical kung-fu is able to withstand determined attacks from every other cryptographer in the world, armed with resources you can only dream of. Very fun.
I recently heard of a twist on the standard digital-signature schemes which struck me as especially neat. They're called ring signatures (PDF), and they focus on the problem of anonymous whistleblowers.
You want to report your record-falsifying boss without endangering your job. If you sent in a fully anonymous tip, nobody would really believe you. So you create a ring signature, calculated from your private digital-signature key and the known public keys of the other employees in your group. Sign your anonymous email with that, and the recipient (by looking up the various public keys) can verify that someone in your group wrote the note, authenticating the information, but they can't tell which one.
This is very neat, and moreover it doesn't even depend on the details of the digital-signature scheme, so you can co-opt the public key of virtually anyone using any algorithm, and create the possibility that they miiiiight have signed your incriminating message themselves.
Uncooled thermal imaging
Posted March 9, 2006
Infrared detectors sensitive around human body temperature are great to have, but they're troublesome to build because they usually require active cooling. These guys have a clever mechanism for passively converting infrared radiation into a normal optical signal that an off-the-shelf camera can read, with no cooling required.
The way it works, as I understand it, is simply to build a narrowband interferometric filter out of a material with a large thermal expansion coefficient. Hmmm, that was a bit of a mouthful... the point is that as the material heats up, the spacing between its layers changes, which means it lets a slightly different wavelength of light through, effectively changing color. So all you have to do is put this at the focal plane of your camera, shine a reference light through it, and take an image of that: as the filter heats up according to what's in the scene, it'll become lighter or darker at your reference wavelength, letting you see the heat image with a regular camera.
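To get a feel for the numbers, here's a back-of-envelope sketch (my own illustrative values, not Redshift's specs) treating the filter as a simple Fabry-Perot cavity whose spacer expands with temperature:

```python
# A Fabry-Perot-style filter passes lambda = 2*n*d/m, so if the spacer
# expands with temperature, the passband wavelength shifts by roughly the
# same fraction as the thickness.  All numbers below are illustrative.

n = 1.5          # spacer refractive index (assumed constant here)
d = 2.0e-6       # spacer thickness, m
m = 6            # interference order
alpha = 5e-5     # thermal expansion coefficient, 1/K (illustrative)

lam = 2 * n * d / m
for dT in (0.1, 1.0, 10.0):
    d_hot = d * (1 + alpha * dT)
    lam_hot = 2 * n * d_hot / m
    print(f"dT = {dT:5.1f} K  ->  passband shift = {(lam_hot - lam)*1e12:.1f} pm")

# Against a sufficiently narrow passband, even tens of picometers of shift
# visibly changes the transmitted intensity at a fixed reference wavelength.
```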
A clever technique, and one which I suspect has a lot more general applicability in other fields... can you find a material which shrinks and expands in response to, say, a certain chemical, or magnetic fields, or air pressure? Then there's an easy way to extract the signal optically with this sort of method. As it is, though, just having cheap infrared cameras will be cool enough.
Note: at the time of writing the Redshift Systems server was down, but hopefully that's just temporary.
Putting the micro in microcontroller
Posted March 2, 2006
Although my present microcontrollers of choice come from Atmel, I've used the PICs from Microchip a lot in the past, and for certain things they're still useful. Specifically, the PIC 10F series is fairly unusual: they're little 6-pin microcontrollers in virtually the same form factor as a normal surface-mount transistor, by which I mean about 3 mm long! That's very tiny, but still hand-solderable with care.
Granted, the capabilities are equally tiny... we're talking half a K of program memory and a maximum of 24 bytes of RAM, but sometimes you really need just a little bit of computation right on the spot. Heck, with this sort of form factor, you can almost just sprinkle them around your board design "just in case" as an impromptu signal conditioner. "Darn, I really meant to have that line be negative true... Hey, I'll just solder on a PIC over here and program it to invert that signal!" For a task like that, 24 bytes is even overkill, after all.
The new models even add a couple of ADC channels, which means that now you have an ultra-miniature sensor platform. Just enough computation to read a value and shove it down a communications bus, with enough smarts left over to manage an intelligent sleep schedule for power management.
Now I just need to figure out how to manufacture circuit boards small enough to match.
A screen that scans
Posted February 23, 2006
You know the old joke about the computer user complaining about how their new fax-modem is a piece of crap? "I hold the paper up to the screen and press 'send', and nothing happens!"
Well, Toshiba is working on making that a reality. They have demonstrated a prototype display which integrates a bunch of photosensors right among the LCD display elements, so the same screen can both produce an image and capture an image of anything pressed up against it, at the same resolution. They actually demoed this a couple of years ago, but it showed up again in a recent SID article, which is how it caught my eye now.
I'm a little skeptical of the details of this, since ordinarily an LCD has a layer of plastic or glass on top of it, which would seem to inhibit the ability to get a good pixel-accurate scan without some extra optics: each sensor would normally be covering some larger angle. But it's just a prototype, so maybe Toshiba left off the cover. You could also use some molded lenslet arrays without a lot of trouble to improve things.
Given that this is somewhat old news, I'm kind of disappointed that nothing has apparently come of the technology since then. I guess with the ubiquity of cheap digital camera chips, there isn't much demand for tech which does the same thing only with less flexibility. Still, it would be cool to have the option, especially for, say, a super graphics tablet/display. At a minimum you can use the same technique to emulate a touch-screen by detecting the shadow of a finger or stylus, and at higher resolution you can probably do a fingerprint scanner as well.
Paradigm shift: $1 versus $2
Posted February 14, 2006
Yesterday I had the pleasure of attending a talk given by Cory Doctorow, sci-fi author and internet freedom-fighter. At MIT he was mainly preaching to the choir, of course, so his talk was interesting without being revolutionary. However, he did drop at least one nugget of info that I hadn't run across before, one with profound implications.
Check out this interview with Brewster Kahle sometime about the digital Bookmobile project, a mobile print-on-demand library.
But all you need to know, and all Cory had to drop to make me sit up in my seat, is this factoid: the cost to print a book on demand is now about $1... the cost for a library to lend and reshelve a book is about $2. Ponder that for a second. Due to the onrush of technology versus the relatively static pace of human labor capacity, it can now be cheaper for libraries to give books away than to ask for them back. The innate copyability of bits is coming to the dead-tree world.
Speaking as a volunteer librarian myself, and moreover as one who is constantly trying to stack books higher and higher in limited space on the off-chance someone will want to borrow one... this change can't come too soon.
If only the copyright laws were ready for it.
3D display with dual backlights
Posted February 5, 2006
There are all sorts of interesting things going on with display technology, so to keep with the backlighting theme of my last entry, here's another innovation along those lines. This patent is the best description I could find of a neat technique for doing 3D displays with only minimal modifications to existing LCD panels.
To make a 3D display generally boils down to somehow presenting a different image to the left and right eye of your user. If you can do that by any means, either by having the user wear polarizing glasses, by putting two different displays right in front of his eyes inside a VR helmet, or by clever optical systems, you're golden, and the rest is just making those two views synthesize a three-dimensional scene.
So in this case, the method chosen is to have two backlights for your LCD panel. Each one shines horizontally from the edge of the screen onto a separate layer of little microprisms which bounce the light up through the LCD panel, but at a small angle. So one backlight illuminates a little to the left, and the other a little to the right... with the user directly in front of the display, that means one is shining into his right eye and the other into his left.
Flash the two backlights alternately and have your LCD panel present the two eye views sequentially, and poof, you're done: you're giving the user's two eyes two different images. Granted, the user has to be in exactly the right place for this to work properly, but the huge advantage is that this is essentially a drop-in replacement for existing backlight assemblies, so you can turn a cellphone display into a 3D imager without a lot of hassle... aaand if you want to drop back down to normal 2D behavior without restricting the user's positioning so much, you just turn both backlights on and run the screen normally.
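To make the timing concrete, here's a little pseudo-driver loop (my own sketch, not anything from the patent); the main requirement it illustrates is that the panel has to run at double the usual frame rate:

```python
# Pseudo-driver sketch of the drive scheme: the panel runs at twice the
# normal frame rate, and each edge backlight is flashed only while its
# eye's image is on the panel.  Entirely a toy, not code from the patent.
import time

PANEL_HZ = 120   # assumed panel refresh; each eye then sees an effective 60 Hz

def show(image, eye):
    """Stand-in for the real panel and backlight drivers."""
    print(f"panel shows {image}, flash the {eye}-eye backlight")

def run_3d(left_frames, right_frames):
    for left, right in zip(left_frames, right_frames):
        show(left, "left")          # left-eye image with its matching backlight
        time.sleep(1 / PANEL_HZ)
        show(right, "right")        # right-eye image with the other backlight
        time.sleep(1 / PANEL_HZ)

run_3d(["L0", "L1"], ["R0", "R1"])
```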
Scanning backlighting for fast motion
Posted January 28, 2006
Philips has a nice technology they are using in their newer LCD televisions for getting crisper images during high-motion scenes. It's called ClearLCD or Aptura, and the way it works is that they illuminate only certain horizontal bands of the screen at a time, matching the way the video image is scanned.
Ordinarily the entire display is refreshed every 1/30 or 1/60 of a second, and is illuminated the entire time, so that the eye tracking a moving object will find it smeared across that length of time: even if the individual frames on the DVD are crisp, the eye is moving continuously and doing the actual smearing.
With Philips' system, each pixel is illuminated during a briefer period, although with a brighter backlight to make the overall illumination the same. In this case, the user's eye gets a crisp image "flashed" onto the retina, which persists until the next crisp frame in the new location. Since the intervening interval while the eye moves is dark, there's nothing to smear.
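A quick sanity check on the smear argument, with made-up but plausible numbers of my own:

```python
# An eye tracking an object moving across the screen smears each frame over
# however long that frame stays lit.  Numbers below are illustrative only.

object_speed = 1800.0       # pixels per second across the screen
frame_time = 1 / 60.0       # seconds each frame is displayed

for hold_fraction in (1.0, 0.25, 0.1):  # fraction of the frame the backlight is lit
    smear = object_speed * frame_time * hold_fraction
    print(f"backlight duty {hold_fraction:4.0%}: ~{smear:.0f} pixels of smear")
```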
This has an additional bonus due to the way LCD panels behave: there's a finite time for a pixel to change from white to black, and if the backlight is illuminated all the time, you see that messy transition. With the new backlighting system, however, you can time it so that sections of the screen are totally dark until the pixels are "ready" with the new image.
It's really a win-win idea, with the only downside being that you have to have backlights which react quickly (i.e. don't need to warm up or anything) and which have peak brightnesses several times what is ordinarily required. They seem to have solved that, but even in the worst case you can gracefully degrade to the old behavior by keeping the backlights on for greater and greater portions of the time if you really need high brightness for certain scenes or situations.
8 teraOPS electro-optical DSP
Posted January 20, 2006
Perhaps it's too good to be true, but the technology isn't impossible, merely a little overly specialized. If you want a chip that can churn through 8,000,000,000,000 operations per second, you should have a look at Lenslet.
The way this works is to convert a computational problem into a physical one: a vector-matrix multiply turns into a vector of lasers, a matrix of light modulators, and a vector of photodetectors. Each laser illuminates a row of the matrix, and each photodetector reads off the sum of the modulated light from an entire column. All the actual multiplication and addition is performed by the light itself, as it is attenuated by the modulators and summed on the detectors.
Of course, the catch is (a) you don't have a universal computer at that speed which you can program, merely a DSP that can accelerate certain types of mathematical operations, and moreover ones of particular size ranges, and (b) the answers you get are not guaranteed to be correct, since they are readouts of physical quantities subject to instrumental noise, rather than digital computations with error-correction at every transistor.
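Here's a toy numpy model of both points, the multiply happening "in light" and the noise that comes along for the ride (my own sketch, not Lenslet's architecture):

```python
# Minimal numpy sketch of an optical vector-matrix multiply: laser intensities
# encode the input vector, modulator transmissions encode the matrix, and the
# photodetectors sum each column.  This is a toy model, not Lenslet's design.
import numpy as np

rng = np.random.default_rng(0)

x = rng.uniform(0, 1, size=8)          # laser intensities (input vector)
M = rng.uniform(0, 1, size=(8, 8))     # modulator transmissions (matrix)

# Light from laser i passes through modulator (i, j) and lands on detector j,
# so each detector integrates one column's worth of products.
light_on_detectors = (x[:, None] * M).sum(axis=0)

# Real detectors add noise; the answer is approximate, not bit-exact.
noise = rng.normal(0, 0.01, size=light_on_detectors.shape)
y_optical = light_on_detectors + noise

y_digital = x @ M
print("max error vs exact digital result:", np.abs(y_optical - y_digital).max())
```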
Still, for some classes of problems, this is perfect: a lot of the time, you're willing to accept an imperfect answer now as opposed to a perfect answer once the missile you're tracking has already flown past.
Using buttons for wearable components
Posted January 13, 2006
Continuing my previous entry on ISWC 2005 papers, the other interesting poster I saw in the proceedings was this paper (sorry, I can't find a free full-text link anywhere) from Hannaikainen, Mikkonen, and Vanhala.
The problem they address is that of true wearable computing, i.e. integrating computers into clothing. You can do a lot of interesting stuff with conductive thread, but unfortunately you often run into the fundamental problem that silicon chips are hard and inflexible whereas clothing is soft and malleable (and washable!). Often people have used snaps to attach hard components to a fabric temporarily, which has the advantage of allowing you to move your expensive computing components from shirt to shirt each day. For smaller, simpler devices this isn't very friendly, however.
The obvious (in retrospect, of course) solution they came up with is to stuff the chips into the various buttons and other hard pieces of plastic and metal that clothes already have. This also provides an easy mechanism to tie the chips into the data bus of the clothes by using conductive thread to attach the button itself. With appropriate packaging, I don't see any reason why such sealed buttons wouldn't be able to survive a wash cycle, too (assuming you powered your shirt down first, heh heh).
Infrared retroreflectors for location marking
Posted January 12, 2006
I got my hands on the proceedings from the ISWC 2005 conference, and while not a lot blew me away in wearable computing developments, there were a couple of neat ideas worth mentioning. A poster from Nakazato, Kanbara, and Yokoya addressed the problem of indoor location-sensing and general tagging of objects.
Without a view of the sky, GPS is problematic, so the Media Lab group typically used little Squirt tags, which were small active infrared transmitters that periodically chirped a unique identification code and then went back to sleep to conserve power. If you were in the vicinity (generally they were mounted on the ceiling, so if you were standing under them), your wearable could easily read off your location from the known tag codes in a second or two.
The infrastructure for this was annoying, however: either you had to wire them into a permanent power system, or go around and replace the batteries every couple of weeks. Bad.
Nakazato et al. have a passive approach instead, putting patterned translucent retroreflectors on the ceiling. To the naked eye they're effectively invisible, but if your wearable has an upward-pointing infrared LED and camera, they shine like beacons in the reflected light, and some simple image-recognition can extract a tag value. No infrastructure costs beyond one-time plastering of the retroreflective placards. Nice.
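The tag-reading step is easy to sketch (this is my own invented pipeline, not the authors'): threshold the IR image, check which grid cells contain a bright dot, and read the pattern out as a binary code.

```python
# Toy sketch of reading a retroreflector tag from an upward-pointing IR
# camera image.  The grid layout and threshold are my own inventions.
import numpy as np

def decode_tag(ir_image, grid=(2, 4), threshold=200):
    """Interpret a cropped IR image as a grid of present/absent bright dots."""
    h, w = ir_image.shape
    rows, cols = grid
    bits = []
    for r in range(rows):
        for c in range(cols):
            cell = ir_image[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            bits.append(1 if cell.max() > threshold else 0)
    return int("".join(map(str, bits)), 2)

# Fake 8x16 image with bright dots in cells 0, 2, and 7 of a 2x4 grid.
img = np.zeros((8, 16), dtype=np.uint8)
img[1, 1] = 255    # cell (0, 0)
img[1, 9] = 255    # cell (0, 2)
img[5, 13] = 255   # cell (1, 3)
print(bin(decode_tag(img)))   # 0b10100001
```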
Mixed-signal FPGA
Posted January 4, 2006
Happy New Year!
To start off the new year, here's a nifty new family of chips: the Actel Fusion FPGA. Although they have some nice secondary features too, the killer characteristic is that they include some analog functionality, allowing a lot of support chips to be integrated right into the main FPGA.
Specifically, they have a nice multiplexing ADC with extensive prescaling options to read analog signals between -12 and +12 V... the analog output feature is less interesting (where's the DAC?!), but does include some current-driving capabilities. If they really wanted to impress me, they'd have shoved some generic high-speed op-amps in there as well, but as it is the chip is sane: they were, I think, mainly aiming for a system-monitor feature set, such as reading voltage and temperature measurements and adjusting fan speeds. For that sort of use, this is perfect, whereas personally I'd be more interested in a comprehensive analog/digital signal-processing chip, which would need more ability to compute with and output analog signals.
Some of the secondary features are neat too. The SRAM blocks include some extra circuitry to use them as FIFO queues with no user logic needed; pins can be assigned to different blocks for interfacing to chips running at different voltage levels; and most interestingly, there is a hardware AES encryption engine allowing the FPGA code to remain secret without a massive reverse-engineering effort... they even arranged it so that you can update the chip remotely with encrypted code over an untrusted channel like the internet. Nice!
Add a free IDE supporting Linux and a reasonably-priced ($350) hardware starter kit, and this looks like fun...