Chip Design Magazine



Posts Tagged ‘quantum’

Game Over? – IP beyond Moore’s Law

Thursday, February 23rd, 2012

Will the creation of a repeatable, single-atom transistor mean the end of Moore’s Law and IP as we know it?

Yesterday, every technology-focused website posted some variation of the news about the creation of the first repeatable single-atom transistor. What does this event mean to the semiconductor IP community?

Scientists have created a working transistor consisting of a single atom placed precisely in a silicon crystal

 Scientists from the University of New South Wales created a single-atom transistor using a precise and repeatable technique. The key word is “repeatable” which was achieved with the help of a scanning-tunneling microscope (STM). Using the STM, scientists were able to precisely manipulate hydrogen atoms around a phosphorus atom on a silicon wafer.

Although repeatable with great precision, this achievement does not mean the process is commercially viable – at least, not yet. Nevertheless, this breakthrough may accelerate the end game for Moore’s Law. Beyond the single atom lies the world of quantum computing, one which will change the way that chips are designed and manufactured. We’ll examine the quantum computing aspects of this achievement in another blog.

What does it mean to have a single-atom transistor? To answer that, we need to remember that a transistor – regardless of its size – is a device that amplifies and controls the flow of an electrical current. When arranged in the proper configuration, transistors can form the very complex logic circuits that are the foundation of today’s computer systems.

Transistors were once the size of vacuum tubes, before the advent of solid-state manufacturing technology.

The University of New South Wales device meets the definition of a transistor but with one serious restriction. Their single-atom transistor must be kept as cold as liquid nitrogen, or minus 321 degrees Fahrenheit (minus 196 Celsius). According to the scientists, the atom sits in a channel. The flowing electrons must stay in the channel for the transistor to operate. If the temperature rises, the electrons gain more energy and move outside of the channel.

In theory, a logic circuit formed from single-atom transistors would be incredibly small. To put this in perspective, Intel’s latest chip, called “Sandy Bridge,” packs roughly 2.3 billion transistors manufactured on a 32-nanometer process. A single phosphorus atom is just 0.1 nanometers wide, which would allow dramatically smaller processors and, in turn, smaller electronic systems. But as Moore’s Law reaches the single-atom stage, look for even greater problems with leakage power and performance.
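As a rough sanity check on those numbers, here is a back-of-envelope calculation using only the figures quoted above (the 32 nm node and the 0.1 nm atom). It is an illustration of scale, not a prediction of real device density:

```python
# Back-of-envelope shrink factors implied by the figures in the text:
# a 32-nanometer process node versus a 0.1-nanometer phosphorus atom.
node_nm = 32.0   # Sandy Bridge process node (nm)
atom_nm = 0.1    # approximate width of a phosphorus atom (nm)

linear_shrink = node_nm / atom_nm    # one-dimensional scale factor
area_shrink = linear_shrink ** 2     # transistors occupy area, so square it

print(f"Linear shrink: {linear_shrink:.0f}x")  # 320x
print(f"Area shrink:   {area_shrink:.0f}x")    # 102400x
```

Even this naive estimate suggests a roughly five-orders-of-magnitude density gap between today’s nodes and the single-atom limit.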

IP for Single-Atom Transistors?

What does an atom-sized transistor mean to the semiconductor IP community? We can understand the effect by extrapolating from the experience of the last several decades of Moore’s Law. IP design and manufacturability are closely tied to the physical constraints and materials of the actual chip.

In a past blog, Neil Hand, group marketing director of SoC Realization at Cadence, explained that IP has always been tied to the manufacturing process and is becoming even more closely tied because of the move to more advanced geometries. He was quick to point out that the relationship to the process differs depending upon the nature of the IP, i.e., whether it is soft or hard IP.

Soft IP isn’t as closely tied to the underlying process as hard IP. Still, an understanding of process capabilities does allow the IP architecture to be optimized for better performance and power. Leveraging the process benefits means that the same IP can be used on different platforms—just as the same video-decoding/decompression IP can be implemented in everything from handsets to home theaters.

On the other hand, hard IP has a tightly coupled architecture that’s determined by underlying process capabilities and physical properties, Hand explains. “It would be impossible to divorce hard IP from the process. As a result, IP companies will be required to have deep in-house process expertise.”

Perhaps this is why IP giants are aligning themselves more closely to the manufacturing process. For example, ARM’s recent acquisition of Prolific, a chip design services company, should strengthen ARM’s physical IP position – including logic, embedded memory and interface cores – at the more troublesome lower nodes.

Will single-atom transistor architectures mean “game over” for Moore’s law? Probably not, since Moore’s law is an economic prediction, not a scientific theory. One way to ensure the continuation of the law is to reduce manufacturing costs. The emerging trend of using 3D layered ICs at existing or even higher process nodes will help reduce these costs while maintaining performance, at least with respect to factors such as device battery life, screen size, weight and others.

An accurate, repeatable process for creating single-atom transistors will bring significant changes to the world of chip design. Semiconductor IP, especially hard IP, is directly affected by any manufacturing changes. Further, IP will need to evolve in other ways, as it currently is doing to accommodate 3D die stacking at existing nodes.

Is the IP community up to the change? If the past is any indication, then the answer is definitely – game on!


Time Cloak for Digital Logic?

Friday, January 6th, 2012

Time cloaking has been demonstrated using light waves. What might that mean for particle models, as in IC applications?

Here’s a mental exercise for circuit designers. It involves the application of a time cloak to electron particles in a digital circuit. But first, a bit of background information might help.

Temporal cloaking allows researchers to change the perception of time. I reported on this amazing experiment last year (see “Time Travel is Out: Stopping Time is In”).

A team of physicists at Cornell University created a time gap by briefly bending the speed of light around an event – not an object. The experiment involved changing the speeds of different light waves. The gap lasted only 50 trillionths of a second. A scaled-up version of this demonstration would allow an art thief to walk into a museum and steal a painting without setting off laser-beam alarms or even showing up on surveillance cameras.

The time gap was demonstrated through the use of light waves. But quantum phenomena can be modeled as either waves or particles. How would a time cloak work in a particle representation?

The key to the Cornell experiment was the changing speed of different wavelengths of light. A corresponding particle representation might involve changing the speed of electron “particles.” But electron motion is at best a statistical measurement, if one applies Heisenberg’s uncertainty principle for momentum and position.

Before exploring this challenge further, one might wonder as to the practical use of time cloaks. What could they be used for? In a circuit, the faster flow of electrons might cause an unintended output from a given set of logic functions. This assumes that the transistors could switch fast enough to operate with higher speed particles. Silicon transistors may not work, but there is an alternative.

Recent reports from IBM show that graphene switches can reach speeds of 100 gigahertz–meaning they can switch on and off 100 billion times each second, about 10 times as fast as the speediest silicon transistors. That should be fast enough for our theoretical time cloak particle experiment.
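The switching speeds quoted here translate directly into switching periods. A quick calculation, using only the figures in the paragraph above:

```python
# Switching periods implied by the quoted figures: a 100 GHz graphene switch
# versus silicon transistors described as roughly ten times slower.
graphene_hz = 100e9                       # 100 GHz on/off rate
silicon_hz = graphene_hz / 10             # "about 10 times as fast" as silicon

graphene_period_ps = 1e12 / graphene_hz   # picoseconds per switching cycle
silicon_period_ps = 1e12 / silicon_hz

print(f"Graphene period: {graphene_period_ps:.0f} ps")  # 10 ps
print(f"Silicon period:  {silicon_period_ps:.0f} ps")   # 100 ps
```

A 10-picosecond cycle leaves, at least on paper, plenty of headroom for the “fast path” in the thought experiment below.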

The next challenge is to create a circuit with two logic flows – one for normal speed and another for faster electrons. The faster electrons would complete their logic functions before the “normal” logic was finished. To what end, you ask? Perhaps to completely disable the rest of the circuit? This might be a problem if the circuit was part of the communication system for a fighter jet.

Of course this scenario is not that new. Many have suggested that RTL could be added to circuits just prior to fabrication in a foreign foundry to achieve the same dangerous result. (see, “Foreign Fabs and Killer Apps”) But with a time cloak, the “hidden” circuit would not be hidden at all or even added in secret. It would be there for all to see but completely undetectable except when the “time cloaked” faster electrons were activated.

Unfortunately, this scenario of a particle-based, digital time cloak is fatally flawed. An astute first year student in engineering would be able to spot the flaw in short order. Can you?

I’ll post my answer in the next blog.

Nanometer Chips Blend Work on Quantum Effects and Light

Friday, January 28th, 2011

For all you science geeks out there, consider this blog as my chip and quantum computer design “weird science” update. [Weird because quantum mechanics is weird … cool weird, but still weird.]

Today’s electronics are based upon a flow of electrons. By contrast, quantum computers will use photons – particles of light – instead of electrons. One of the first steps in building a workable quantum computer is to create an on-demand photon generator. Two recent papers by National Institute of Standards and Technology (NIST) scientists define a mechanism for creating and delivering such photons. In their papers, these scientists describe not only how to produce a steady flow of photons but also how to do so one at a time and only when needed by the computer’s processor. (see Figure 1)

Why build a quantum computer in the first place? The reason is simply that such systems could perform calculations that are impossible using conventional computers by “taking advantage of the peculiar rules of quantum mechanics.”

For more developments on quantum computers, see: “Quantum Computers Move A QuBit Closer To Reality”

Figure 1: The gated photon source starts with a bright green laser beam that strikes a crystal and is converted into pairs of photons – one at the red end of the visible spectrum (false-colored blue here) and one in the infrared (false-colored red here). The “blue” beam is the herald channel; the “red” beam goes through a spool of optical fiber (right) to delay it long enough for the gate to open or shut. Credit: Brida, INRIM
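The gating scheme in the caption can be sketched as a toy Monte Carlo model: a pump pulse occasionally produces a photon pair, and detecting the herald photon opens the gate for its partner. The pair-production probability and detector efficiency below are illustrative assumptions, not NIST’s measured parameters:

```python
import random

random.seed(1)
PAIR_PROB = 0.05         # assumed chance a pump pulse yields a photon pair
HERALD_EFFICIENCY = 0.9  # assumed chance the herald detector fires, given a pair

def pump_pulse():
    """Return (herald_detected, signal_photon_present) for one pump pulse."""
    pair = random.random() < PAIR_PROB
    herald = pair and random.random() < HERALD_EFFICIENCY
    return herald, pair

delivered = 0  # signal photons passed on while the gate was open
blocked = 0    # signal photons discarded because no herald fired

for _ in range(100_000):
    herald, signal = pump_pulse()
    if signal:
        if herald:  # the fiber delay gives the gate time to open
            delivered += 1
        else:
            blocked += 1

print(f"delivered: {delivered}, blocked: {blocked}")
```

With these assumed numbers, roughly 90% of the generated photons are passed on, one at a time and only when heralded, which is the on-demand behavior the NIST scheme is after.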


Optical systems, like their quantum brethren, are also based on light. Since nothing that contains any information can travel faster than the speed of light, the medium makes an ideal candidate for high-bandwidth interface I/Os between deep-submicron CMOS chips. The curious thing is that deep-submicron chips are now designed at the nanometer level, where the effects of quantum mechanics begin to actively affect the flow of electrons.

Isn’t it interesting how the two worlds of quantum mechanics and light keep impinging on one another?

IMEC, the world’s leading research center in nanoelectronics, has just launched a new research program aimed at high-speed, high-bandwidth optical interfaces for communication between chips. If anyone can shine some light onto this problem of achieving mind-boggling communication speeds between integrated circuits, it’s IMEC.

Figure 2: Silicon-photonics wafer processed at IMEC’s fab.

Human Intentions Affect Electronics

Thursday, November 18th, 2010

My original goal was to discuss a new type of human-electronic interface with John Valentine, CEO of Psyleron. However, the discussions quickly led into the mysteries of quantum entanglement, crowd sourcing, and even ghost hunting. What follows are excerpts of that conversation, which was conducted earlier this year.

Blyler: Originally, I was working on a story about futuristic man-machine interfaces, such as Emotiv-style headbands, Microsoft Project Natal (now Kinect) full-body cameras, and even Intel’s work on implanted control chips to control a mouse, etc. While doing research, I came across the Princeton Engineering Anomalies Research (PEAR) experiments, which led me to you. I’m curious about how your system works.

Valentine: The companies that you mention are much more into actual interface design than we are—companies like NeuroSky, Emotiv, etc. Most of them are using the electroencephalography (EEG) -type apparatus to measure electrical signals on the surface of the head. These techniques were first developed back in the ’50s and earlier, when they were used for biofeedback. Back then, many people were skeptical about the technology. They didn’t really believe that it could be done. Now it is totally mainstream and accepted.

We are still in the fringe bin. Our research comes out of the PEAR labs. The original experiments in the PEAR labs were based upon a student project to investigate how the mind might be able to influence the outcome of random physical processes. The student read about some work that was done at Boeing and elsewhere, which led her to approach Robert Jahn, Dean of the Princeton University School of Engineering and Applied Science. He ultimately started the PEAR labs.

The first PEAR-lab pilot studies found that the outputs of the random device generators were skewed in the direction of the person’s intentions (see figure). To this day, we have no known mechanism to explain this phenomenon. Originally, researchers thought it must be electromagnetic (EM) in nature. They thought that the people (test subjects) sitting near the electronically based random generators were influencing the experiment with a conscious effort to produce EM signals, which would skew the output of the random generator device—not by much, only a little bit.

Figure: PEAR random event generator (REG) with display.

But in later experiments, we took great pains to make sure those devices were not susceptible to any EM influences. Today, after 15 to 20 years of testing, it is pretty clear that it would be impossible to get the results we were getting if this was electromagnetic.

Blyler: What did you do, shield the electronics in the random generator boxes?

Valentine: Yes, but we shielded them in extreme ways. Further, we processed the data in such a way that any straightforward EM effect wouldn’t get through. In the early days, they separated people from the device by huge distances, as EM effects fall off rapidly with distance – as one over the distance squared. This is definitely not the case in our experiments, which leads us to believe that whatever we are working with is something that is not understood—maybe not even known yet. What this phenomenon most closely resembles is a quantum entanglement, in which you have two related systems that communicate with each other at any distance. Some physicists would get mad about this possibility—although behind closed doors, they will admit it.

Blyler: How would you explain quantum entanglements?

Valentine: Basically, the common example of entanglement is a particle that has decayed into two pieces (photons) that go off in opposite directions. One has a momentum of +p while the other has a momentum of –p. Let’s suppose they have the same weight and other characteristics. In quantum mechanics, Heisenberg’s Uncertainty principle says that you cannot precisely know the position and momentum of a particle. These are basically two conjugate (oppositely related) properties, whereby the more information you have about one, the less you can have about the other. This is a strange but well-established relationship in quantum mechanics.

Let’s return to the situation of the decaying particle. Physicists tried to side-step Heisenberg’s Uncertainty principle by blowing up a molecule in such a way as to create two identical particles—one a momentum +p while the other had –p. Their intention was to measure the position of one of the particles and then measure the momentum of the other particle. In this way, they would precisely know both position and momentum. In other words, they would know everything about the system.

Everyone agreed that, if you only had one particle, there would be no way to gather all the information—that is, to know both the position and momentum of the particle. So with two identical particles, you could know everything. Classical physicists believed that Heisenberg’s Uncertainty principle was not a limitation of the physical universe, but rather a limitation of our measurement process. They would argue that, in the process of measuring the particle, we disturb it in such a way as to only get one property or the other (i.e., to measure either the position or the momentum).

Entanglement theory explains the inability to measure the position and momentum of two identical particles traveling in opposite directions by positing that the measurement of one particle actually disturbs the measurement of the other—even if the other particle is never actually measured. Also, this is clearly an example where no EM communication is taking place, since the particles are too small and the entanglement phenomenon is not distance-dependent. The only thing that seems to link the two particles together is their initial conditions (in an entangled state).
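For readers who want the textbook form of what Valentine is describing, the uncertainty relation and a zero-total-momentum entangled state can be written as follows (standard notation, not taken from the interview):

```latex
% Heisenberg uncertainty relation for the conjugate pair position/momentum:
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}

% A two-particle state with zero total momentum, as in the decay example:
% measuring particle A's momentum as +p forces particle B's to be -p.
\lvert \Psi \rangle \;=\; \frac{1}{\sqrt{2}}
  \Bigl( \lvert +p \rangle_A \lvert -p \rangle_B
       + \lvert -p \rangle_A \lvert +p \rangle_B \Bigr)
```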

Nobody really has a good answer as to why this happens. But it is a well-known phenomenon referred to as entanglement. It was a big problem for people like Einstein and others, who couldn’t believe that things like this happened. But empirically, it is a fact. No physicist would ever deny it today.

Our work falls into this category, for which there is not yet any physical explanation whatsoever. There is no reason as to why we should be seeing this mental influence on a random process.

Blyler: Do the subjects involved in your experiments wear any kind of sensor? Or does the person simply think in a certain way to affect the random-event generator?

Valentine: It is the latter. There is no physical connection between the person and the apparatus. In a sense, if you were to define a sensor, I would say that the random-event generator is the sensor. What’s happening is that the random-event generator is measuring these very-low-scale, random fluctuations in the circuit. In a sense, you could say that somehow, someone is affecting that process using intention or whatever it is.
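The “skew” these experiments look for can be quantified with a simple binomial z-score on the generator’s bit stream. The sketch below injects an artificial 5-percentage-point bias into made-up data to illustrate the statistic; it is not PEAR’s or Psyleron’s actual analysis:

```python
import math
import random

def reg_z_score(bits):
    """Z-score of the count of 1s in a supposedly fair random bit stream.
    Values far from 0 suggest the stream is skewed away from 50/50."""
    n = len(bits)
    ones = sum(bits)
    mean = n * 0.5                    # expected count of 1s for a fair source
    sigma = math.sqrt(n * 0.5 * 0.5)  # binomial standard deviation
    return (ones - mean) / sigma

random.seed(0)
fair = [random.random() < 0.5 for _ in range(10_000)]
skewed = [random.random() < 0.55 for _ in range(10_000)]  # injected bias

print(f"fair stream:   z = {reg_z_score(fair):+.2f}")
print(f"skewed stream: z = {reg_z_score(skewed):+.2f}")
```

A fair stream stays within a few standard deviations of zero, while the deliberately biased stream lands around ten standard deviations out; a genuine intention effect would have to show up the same way in the data.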

Blyler: If you have an entire room of people with the same intentions, is the effect even greater than with a single user?

Valentine: That doesn’t appear to be the case. There is definitely not a linear scaling of the effect. However, we have seen some strange things happen in that regard.

In situations involving two or three people, the results can be improved or lessened, depending upon the types of people. For example, when two men try to work together to influence the random-event generator, the overall effect is null—worse than with just an individual person. We chalk it up to some kind of psychological thing, whereby the men feel a bit silly trying to effect a change in the electronic generator merely by thinking about it. They tend to feel very awkward and do worse than if acting alone.

On the other hand, romantically involved couples seem to do much better than any other group. So it is not really a function of the number of people as it is the psychology of the people involved in the experiment—the comfort level of those people.

We are working with weird stuff. We have a hard time finding any kind of strong physical correlation, although we have found many strange physiological correlations.

Blyler: Let’s talk about your business model, moving from basic research at the start to one of commercial engagement. Psyleron does have products. Is this revenue the main source of funding for future work? Or is it a marketing vehicle to get the word out? Which products are more popular?

Valentine: It is simultaneously a funding source and a research project, depending upon which products you’re talking about. We have three main products—each of which caters to very different types of people.

I used to work at the PEAR labs, and the same people are still involved with Psyleron. A lot of research went on over a very long time at the PEAR labs. A lot of money was spent. Progress was made. But it also became incredibly clear that one lab, testing a few people here and there, wasn’t enough. At this stage in the experiments, we needed to do something else.

The idea behind Psyleron was to create products that were both grounded in research and allowed the public to try them out. We deliberately didn’t make a big deal about the products or the related phenomena. Instead, we simply make all of these things available so that people can give them a try. The results have been very interesting. For example, we heard back from people using the mood lamp, who report that the lamp does indeed change to the color that they want or desire.

Once people have these firsthand experiences, they seem to be more likely to talk about it to other people—including us. Many users report anecdotal experiences to us, which gives us far more information than we would otherwise have. For example, in the case of the mood lamp, we learned a lot more by providing the lamp to the general public than if we had brought a stream of people into the lab to take part in the same experiment. Also, this public-participation approach provides information for more formal research experiments.

Another example is a cell-phone-based service called Synctext. This program is based upon some of the most far-out properties of the effect (interaction of human consciousness with physical devices) that we have found. This system allows us to constantly collect data from people as their pre-created messages are randomly sent to their cell phones. We don’t read their messages. In fact, we purposely made it so we don’t have access to their messages. Instead, we check whether or not the people that are reporting good results are getting messages more frequently than other people. We can look to see if there are correlations with time of day or year, or if people who refer other people tend to be doing better than everyone else.

This data informs our thinking in a way that would never have happened if we were in a small university research lab.

Blyler: What is Synctext?

Valentine: It is a web interface that gives people access to a random-event generator in a remote location. Users first put in a list of messages that they might want sent to themselves or choose from a prepackaged list of messages. Each user has random-event-generated data being created for them all the time, from that moment on. At Psyleron, we can process that data to look for patterns. When certain things are detected in the user’s data stream, the system will automatically decide to send a message to their phone. It also picks the message that will be sent. If there were no effect (between the human consciousness and a physical device), then you would simply expect completely random messages to come to your phone at completely random times. But what we have found suggests that these people have a subconscious influence on their devices. Our original hypothesis was that, in such a system, it would make it quite possible that some people would get better results here than anywhere else. They would trigger the sending of appropriate messages to themselves.

Blyler: I assume that these messages are vague like fortune cookies, but with specific meanings. You wouldn’t want messages that would satisfy any occasion.

Valentine: We certainly give people the ability to define their own messages. For myself, though, I create dichotomous messages that have specific meanings so that, at a minimum, I would have a 50% chance of getting a yes or no. My goal was to make a quantifiable experiment. This works because of the probabilistic math behind this phenomenon.

You’re in a strange boat with this thing. For example, let’s say you regularly go to the gym three days a week from 8 to 9 a.m. You decide to create 10 messages—only one of which relates to the gym activity. Perhaps it says, “Make sure that you work out hard at the gym.” The other nine messages have nothing to do with this event. Over the course of a month, you can see how often you receive messages related to that one-hour window at the gym. In this way, you come out with well-controlled statistics as to whether or not you received that message more frequently than chance.
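Valentine’s gym example can be made concrete with a binomial tail probability. The counts below are hypothetical, chosen only to show the arithmetic:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more hits at random."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical month: 12 messages arrive during the gym hour, and 3 of them
# are the single gym-related message out of 10 (so p = 0.1 by pure chance).
n_messages = 12
gym_hits = 3
p_chance = 1 / 10

p_value = binom_tail(n_messages, gym_hits, p_chance)
print(f"Chance of {gym_hits}+ gym messages in {n_messages} at random: {p_value:.3f}")
```

For these made-up counts the tail probability comes out around 0.11 — suggestive at best, which is why a quantifiable experiment needs many more observations or a stronger hit rate before claiming an effect.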

On the other hand, a lot of people are more interested in getting messages that are very appropriate, but without any desire to quantify the experience. So it’s up to the person’s subjectivity to decide. Different people use it for different reasons.

Blyler: There would definitely be a psychological component to the experiment, depending upon how you set it up.

Valentine: Yes, there are two basic competing factors—especially if you allow for the possibility that it works. First, people may get completely irrelevant messages that they assume are relevant—that is, they’ll find a way to make the messages fit. Or they will say the messages are irrelevant and not try to make them fit.

But if there is an effect, the challenge from a research standpoint is the lack of control in the experiment. In other words, people who are more open to a relevant connection may be more inclined to generate positive effects. But there will still be a lot of false positives—that is, people getting or interpreting messages as meaningful when they are not. As a traditional researcher, you need to find a way to measure and remove those false positives.

However, we are not looking at the data at that level of detail. We don’t have to worry about it. Instead, we caution people not to expect every single message to be relevant. In fact, given the sampling sizes that we typically see and the probabilistic nature of the system, we would be thrilled if users got one super-relevant message per day that is quantifiably significant. But when that happens—when people get one significant message per day—then they (the people) feel disappointed. The user thinks the system doesn’t work at all because they have a very deterministic mentality. It’s rather comical.

Of course, some people get freaked out with even one significant message a day and just quit.

Blyler: I wonder if these easily spooked users are the same ones who easily have paranormal experiences like seeing ghosts.

Valentine: Yes, it would be nice to know about some of the psychological relationships. That is really hard to predict. Some of the people really believe in this stuff. Others are interested, but feel like they shouldn’t be and get insecure about it. Or they don’t want other people to know they’re doing it. Or don’t want to admit that it is working for them. It’s interesting that many of those skeptics or closet users end up getting really good results. Once they think it over and feel less uncomfortable with the results (i.e., they adjust their emotional biases), they come back to us and enjoy the participation.

Blyler: Have you ever thought of using crowd sourcing for your work? It seems to me that you’re tending in that direction with the SMS-based mobile-phone messaging service.

Valentine: Crowd sourcing is a direction where I’d like to go because it is so much more economical and has so much more potential.

In the lab experiments, one of the biggest problems is bringing people in to conduct experiments. While they might generate great data, the natural inclination of the statisticians and the scientist is to replicate the experiment again and again. Eventually, the people get sick of participating.

Crowd sourcing might cause less fatigue among the people participating in the experiments. The people might want to do the experiments. They might even find it interesting rather than wanting to quit.

Blyler: Who are the competitors in your field?

Valentine: We started this company based on the random-event-generator product. The idea there was to enable researchers at other universities to conduct the same kinds of experiments that PEAR did, but in an inexpensive way. To date, I’d say that over 100 scientists in other university environments have acquired our hardware and software to see what it does. I’m not sure how many of these people are actually conducting experiments based on our system, but most seem to be exploring what it does. You might count these scientists as competitors.

But here is the funny part. Many professors and scientists look at our data and understand the results. They have even been to the PEAR lab and talked with us about our work. Many seem fascinated by our findings and want to conduct the experiments for themselves. But there is no real incentive for any of them to go public with their findings, as it might tarnish their reputations by becoming associated with ghost hunters or whatever else is out there. They know that people might relate to our work in that way.

In addition, there are no national scientific organizations or philanthropic groups specifically dedicated to fund this research. This is why most scientists will not stick their neck out to report their findings.

Blyler: You really speak with passion about the “silence” of the scientific community. Yet you’re a scientist. Why do you feel differently?

Valentine: It helps that I’m the CEO of the only commercial organization that investigates how human intentions can influence electronic devices at a quantum level. This has made me a bit more open source-ish about it (i.e., wanting to get this phenomenon out to the public). My hope is to get more scientists and others involved to generate some kind of progress on this topic.

Interestingly, I find that once scientists begin to get positive results from these experiments, they really are surprised. The first thing that pops up in their minds is, “My God, we’re onto something that no one else in the world knows about. Look what we’ve found.” However, rather than go public with their findings, many scientists try to keep it a secret—apparently in the hopes of getting funding to build some kind of super product based on their findings.

I don’t think that (i.e., building a super-product space around this technology) will actually happen. As a whole, we are too far behind in understanding this phenomenon. Further, having worked in this area for a long time, I have a pretty good idea of what other scientists are up to. They come to our center with a fairly secretive agenda. But after they ask me five questions or so, I know what they are exploring. In other words, most of the scientists are relating to this phenomenon in the same way, which is from the perspective of our current science and engineering paradigm. In the end, everyone asks the same kinds of questions and conducts the same kinds of experiments—all the while thinking they are doing something novel.

Having been through the same learning process, my colleagues and I already know what those results will be. It is frustrating for me because I’ll see other scientists spend a year or more of effort in secret to achieve results that are already known. Again, what is motivating these people is the hope of developing some new technology when we (at Psyleron) already know that what they seek won’t work.

It would be far better if these scientists would open up about their results. If a larger mass of people—specifically scientists—were comfortable doing that, then it would really open up the possibilities for much broader research in this topic.

Blyler: The open-source movement has really caught on in the software world and—to a lesser extent—the hardware world. Maybe open source is the answer for growing cooperation and progress in this area of research?

Valentine: I’m encouraged by the younger generation of scientists and engineers. They seem to relate to it with fewer stigmas than some of their elders. Maybe in a decade or two, as you say, this will be able to really pick up.

Blyler: On the software side and thinking as an entrepreneur, I could envision developing a game that works with this “intention-electronics” connection. Do you have a software-development platform that works with the random-event generator to create the next big gaming sensation?

Valentine: Yes, we are definitely moving much more aggressively in that direction. By next year (2011), we’ll have many more software offerings. That is the most economically efficient way to get the stuff out there. Now that we have a strong background with our systems in the real world, I think we are better positioned to try to create things that have value to people.

Blyler: If popular television shows are any indication of public interest, perhaps you should offer your system to ghost hunters.

Valentine: We do get e-mails from ghost-hunting people who want to incorporate our technology into their apparatus. I’m not into that myself, so I don’t know much about ghost hunting. However, from my perspective, we do know that our devices respond to emotional states, to periods of heightened emotion. This means that people in a ghost-hunting environment are likely to claim that they really have data supporting the existence of ghosts. Unfortunately, since they are typically in spooky situations that heighten human emotions, they may draw a connection that the data cannot actually prove.

Blyler: They were scared, which led to a reading that might be false. Your technology seems like it would fit on the fringe of many metaphysical activities. Isn’t that more of a curse than a blessing?

Valentine: It’s a dilemma with our field, but I’m willing to accept it because I’m interested in whatever data I can get back. For better or for worse, there are many people who have something like, let’s say, ghost hunting: some immeasurable, intangible something or other that they wish they could get quantifiable data about. We are working with a phenomenon that is currently unexplainable in scientific terms. But we know that it does respond to things that somehow relate humans and emotions to electronics at a quantum level. This means that people will definitely apply it to anything unknown.

We don’t come out and say what our technology will or won’t respond to, so people will naturally use it for all kinds of experiments. Often, groups will use it to validate their expectations or experiences. In some cases, they will get positive results. But the challenge is exactly as I have said, namely, that there are too many confounding factors to say for sure that the results are driven by the one thing for which an experimenter is looking. We are not at a stage in our study of this phenomenon where we can validate these specific claims.

Blyler: In other words, these group activities are not happening in a controlled experiment. This means that many other variables could be affecting the outcome.

Valentine: Yes. But more importantly, our work has shown that the observer affects the outcome of the experiment. This is the key challenge and the key difference between us and other technologies in this area (like EM bands, etc.). Most scientific experiments rely on the idea that the observer is independent of the experiment. If you conduct the experiment 10 times, you’ll get similar results to what I would get if I conducted it 10 times. In our case, we’re in a situation where, without being attached to any kind of apparatus, we know that people influence the devices. We know that the experimenters influence the devices. This is a problem for people who use our technology to validate a certain intention or belief. I would caution them that they influence the devices based upon what they are thinking, feeling, doing, etc.

So then, these people take it into an environment, hoping to generate a positive result for one thing or another (e.g., the presence of a ghost). But they have already influenced the experiment based on their intentions and emotions. Thus, they can get any confirming result that they want. At the end of the day, it’s not that they are lying about the data or even conducting a faulty experiment. Those things can happen. Rather, it is that they can influence the devices to produce a result along that line. This blows all types of scientific inquiry out of the water in this field because you can never be sure what drove the outcome.
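As an aside, the chance-expectation baseline against which such random-event generators are judged can be sketched in a few lines of Python. The snippet below is a generic illustration assuming a device that emits a stream of bits; the z-score statistic, 1.96 threshold and sample size are my own illustrative choices, not Psyleron’s actual method.

```python
import random

def reg_z_score(bits):
    """Z-score of the count of 1s in a bit stream versus the chance
    expectation of a fair coin: mean n/2, standard deviation sqrt(n)/2."""
    n = len(bits)
    ones = sum(bits)
    return (ones - n / 2) / ((n ** 0.5) / 2)

# Simulated "REG" output: a pseudo-random bit stream stands in for hardware.
random.seed(42)
bits = [random.randint(0, 1) for _ in range(10_000)]
z = reg_z_score(bits)
# |z| < 1.96 is consistent with pure chance at the 5% significance level.
print(f"z = {z:.2f}, consistent with chance: {abs(z) < 1.96}")
```

The experimental question the interview raises is whether a focused human intention can push this z-score away from zero more often than chance alone would allow.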

Blyler: This supports the quantum-physics thought experiment of Schrödinger’s cat (i.e., that at a quantum level, the observer affects the experiment).

Valentine: It’s interesting because certain very conservative theoretical physicists are very uncomfortable with our work, not only because it is unconventional but, more importantly, because it seems to closely relate this phenomenon to known quantum effects. While these people are comfortable with quantum effects, our work alludes to the idea that they have something to do with human intention, thoughts or something else.

Another area of discomfort is that in traditional quantum physics, which most people don’t even like that much anyway, there are no intentions or choices. Things are much more random. But our experiments bring consciousness into the discussion. This makes most physicists very uncomfortable.

Further, our work implies that these effects can occur at a much more macroscopic scale than most quantum physicists would like. Even though we are measuring very small probabilistic processes, our measurements still take place on a large-scale physical apparatus. That apparatus is attached to large computers, which are being influenced by a large group of people, and so on. So that is uncomfortable.

But the comical thing about the macroscopic-level implications of our work is that every two or three years, a physics journal will publish something about a researcher establishing that quantum effects can occur on a slightly larger scale. This has been happening from the beginning, when these effects were only applied to quantum-scale interactions. Today, scientists are taking giant chains of molecules and saying that they can be entangled with one another. This trend shows that the boundary is being pushed further and further up the physical scale.

Still, most people are really uncomfortable with the idea that humans may somehow influence randomness. That level of discomfort may not go away until a new generation of scientists comes along who can be a little more comfortable with that concept.

Blyler: This has been a fascinating talk. Thank you.

Dark Energy’s Role in Cosmology and Semis

Thursday, November 4th, 2010

Semiconductor engineers, the makers of today’s tiniest integrated circuits that power our electronic world, know about the realm of the very small. Leading-edge chip designers and manufacturers work with transistor structures that are a mere 10 nm wide and shrinking ever smaller. For reference, a hydrogen atom, one of the smallest atoms, has a diameter of about 0.1 nm. In other words, a mere 100 hydrogen atoms occupy about the same length as one modern transistor. (Later it will become clear why I chose a hydrogen atom for reference.)

The world of nano-sized electronic circuits obeys the laws of quantum mechanics. But what if something is fundamentally wrong with Einstein’s theories, especially as they apply to gravitation on the largest scales? This challenge to Einstein’s theories is not coming from the quantum world but from its big brother, namely, cosmology: the study of the universe, from stars to distant galaxies. By studying distant supernovae through the orbiting Hubble Space Telescope, cosmologists have found that the universe is now expanding faster than in the very distant past. This means that the expansion has not been slowing due to gravitational forces, as was expected. Rather, the expansion has been accelerating. But what is causing the acceleration of the universe’s expansion? What does it have to do with Einstein and, indirectly, with semiconductor engineers?

The answer to these and other related questions was at the heart of a talk by renowned cosmologist Alex Filippenko, who spoke at a recent Institute for Science, Engineering and Public Policy (ISEPP) event (Figure 1 below). Filippenko’s talk, titled “Dark Energy and the Runaway Universe,” drew a large audience to Portland’s Arlene Schnitzer Concert Hall. The high attendance was noteworthy because it occurred on Nov. 2, 2010, the same night as the returns from a very contentious set of national mid-term elections.

Those in attendance enjoyed an evening well spent. Although the subject matter was complex, Filippenko’s talk was anything but dark and somber. Instead, this lively speaker peppered his discussion with fascinating conjectures while tossing a Newtonian apple as a visual aid. To keep the audience awake, he occasionally injected a bit of comic relief, such as acknowledging the general public’s confusion between cosmology and cosmetology, the latter being the study of hair styles and facials.

Dark Energy, Dark Matter

Although Filippenko’s talk was both entertaining and enlightening, his main goal was to discuss the existence of dark energy and dark matter. Both of these sinister-sounding concepts arose as a way to explain the counterintuitive finding that the expansion of the universe is accelerating instead of slowing down. Theorists have proposed three possible explanations for the accelerated expansion.

First, the expansion might be due to a cosmological constant that fills all space with a constant energy density. Interestingly, this idea was proposed and then discarded by Einstein in his original theory of gravity. Second, perhaps the empty areas of space are really filled with some strange kind of energy fluid whose density can vary in time and space. Lastly, some theorists speculate that there might be something wrong with Einstein’s gravitational theories.

Nobody knows for certain which explanation is correct. But cosmologists have collectively decided to call the solution, whatever it turns out to be, “dark energy.”

Dark energy is a form of energy that exists in all space and increases the rate of expansion of the universe. According to calculations by cosmologists, 74 percent of the universe is composed of dark energy. Another vaguely descriptive entity known as “dark matter” makes up 22 percent of the universe. What is dark matter? In theory, it is real matter, not anti-matter. The idea of dark matter was created to explain discrepancies in the mass of galaxies, clusters of galaxies and the entire universe.

Indeed, both of these dark terms are used as theoretical “place holders” to help reconcile the measured shape of all space with the total amount of matter in the universe. This means that normal matter, such as people, Portland, politicians, planets and everything else (the “etc.” in Figure 2 below), makes up only a very small fraction of the universe.

Filippenko’s lecture, including a question-and-answer session, lasted almost two hours. In the end, he noted that “the wise, betting man would say that the universe will either expand forever or for an extremely (finite) long time. Nevertheless, it might collapse back upon itself.” This statement led into a philosophical observation that the famous poet Robert Frost must have known about these two possibilities: either the universe will re-collapse, becoming hot, dense and compressed, to end in fire; or it will expand eternally, becoming ever darker, more dilute and colder.

Fire and Ice – by Robert Frost

Some say the world will end in fire,
Some say in ice.
From what I’ve tasted of desire
I hold with those who favor fire.
But if it had to perish twice,
I think I know enough of hate
To say that for destruction ice
Is also great
And would suffice.

Questions at the Table

At the dinner that followed the public lecture (Figure 3 below), I asked Filippenko several questions. First, most of his findings and data seemed to come from the use of extremely powerful optical telescopes for the observation of very distant galaxies and clusters of galaxies. But what about the use of radio telescopes, such as the one at Arecibo, Puerto Rico? (See “Computational Powerhouse Hidden In Island Jungle.”)

He was quick to point out the complementary role that radio telescopes play in cosmology, such as the capability to measure the 21-cm hydrogen line (HI). This term refers to the spectral line of electromagnetic radiation (1420.4 MHz) that is created by a change in the energy state of neutral hydrogen atoms. The HI spectral line is important in radio astronomy since it can penetrate the large clouds of interstellar cosmic dust that would interfere with the visible light needed by traditional optical telescopes.
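The “21-cm” name follows directly from that rest frequency. A quick back-of-the-envelope check (a generic calculation of mine, not a figure from Filippenko’s talk):

```python
# Wavelength of the neutral-hydrogen (HI) line from its rest
# frequency, using lambda = c / f.
c = 299_792_458.0   # speed of light, m/s
f_hi = 1420.4e6     # HI rest frequency, Hz
wavelength_cm = (c / f_hi) * 100  # convert meters to centimeters
print(f"HI wavelength: {wavelength_cm:.1f} cm")  # ~21.1 cm
```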

After answering this question, he noted that aging telescopes and laboratories, such as those at Arecibo, are casualties of our nation’s misplaced budgetary priorities. He was not shy in tackling audience questions about the need for science, stating that all scientists must be able to defend the tax dollars that they receive, but that the cost of science, when compared with other expenditures, is very reasonable for the return.

He made several interesting observations during a brief dinner presentation. Most noticeable to me was that the angular diameter of the Moon is about the same as the angular diameter of the Sun (about 1/2 degree). This is why the Moon completely covers the disc of the Sun during a solar eclipse. However, the distance between the Moon and the Earth is slowly increasing over time, so the angular diameter of the Moon is decreasing. In about 600 million years, the Moon will no longer cover the Sun completely, and no total eclipses will occur. “We live in that very narrow window of cosmological time (about 1 billion years) in which the Moon-Sun eclipses are perfect in size,” explained Filippenko.
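The half-degree coincidence is easy to verify. The sketch below uses rough mean values for the diameters and distances of the Moon and Sun (my own assumed figures, not numbers from the lecture):

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Full angular diameter, in degrees, of a body seen from a distance."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

# Rough mean diameters and distances (assumed illustrative values).
moon = angular_diameter_deg(3_474, 384_400)          # Moon: ~0.52 deg
sun = angular_diameter_deg(1_391_000, 149_600_000)   # Sun:  ~0.53 deg
print(f"Moon: {moon:.2f} deg, Sun: {sun:.2f} deg")
```

Because the two values nearly coincide today, a small further increase in the Earth-Moon distance is enough to end total eclipses.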

Earlier in the evening, he had issued a challenge to everyone in the room to be cosmic observers of the “Path of Totality” in 2017. The path of totality, which can be more than 100 miles wide, represents the Moon’s shadow as it is traced across the Earth during a total solar eclipse. The 2017 event will occur in North America and will be visible from parts of Oregon.

It didn’t occur to me until my drive home that the last total solar eclipse in America occurred in 1979. As a young undergraduate physics student at Oregon State University, I observed that eclipse from just outside the Path of Totality. Still, the darkening landscape as the Sun disappeared behind the Moon left a haunting memory that stays with me still.

Cosmology and Semiconductors

Let me return to my opening question: What is the connection between an expanding universe, the possible failure of Einstein’s theories and semiconductor engineering? The answer lies in the study of quantum gravity, a field of theoretical physics that attempts to unify the very small with the very large. Quantum mechanics explains the behavior of the very small, for objects no larger than molecules. General relativity works for the world of the very large, for bodies such as collapsed stars. But what happens if the explanation for a paradoxically accelerating universe invalidates key portions of Einstein’s theories? If so, then our understanding of both worlds may collide into chaos.

Maybe cosmologists will discover the answer by further studies of very distant and ancient galaxies. Or maybe semiconductor engineers will find an explanation as they push the boundaries of integrated circuits beyond the atomic level. Either way, it promises to be an exciting time for science and engineering.

Lynguent, Altos, InPA and Faust in Copenhagen

Friday, September 10th, 2010

The problem with being an editor in today’s market of disappearing publications and out-of-work colleagues is that you have far less time to cover all the good stories that are out there. I hope to address this problem – at least in a minor way – by briefly highlighting the companies and technology leaders that I talk with on a weekly basis.


Lynguent

You may recall that Lynguent was started in 2001 by Martin Vlach, a year after Analogy was acquired by Avant!. Martin helped start Analogy in the mid-1980s. Like Analogy, the focus of Lynguent was on the vital but niche market of analog and mixed-signal (AMS) simulators.

Today, Lynguent continues to accelerate the modeling and simulation of complex systems and circuits. So what’s new at Lynguent? A quick check of the company’s online press room reveals an increase in the number of new advisors. Jean Armstrong, PR agent for Lynguent, confirms that the company will have more personnel, technology and product announcements to make before the end of the year.

Altos Design Automation

You never know who you will meet on a flight from Portland to San Jose. On a recent trip to DAC, I found myself next to Kevin Chou, VP of R&D and Founder of Altos. Kevin and I talked about the evolving role of characterization in IP reuse. That conversation eventually led to a more detailed discussion this week with Jim McCanny, CEO and Co-Founder of Altos, an EDA company that provides characterization technology for IP reuse and more accurate modeling of timing, power and process variations.

I had erroneously assumed that Altos provided device characterization tools when, in fact, their technology is used for IC characterization, e.g., standard cells, I/Os and memories. What makes this technology interesting is that it helps to clarify the fuzzy design and manufacturing boundaries that become especially tricky to manage at the 40-nm process node and below. Jim explained that traditional IC characterization via interpolative models adds a significant amount of margin to the final model. These wider margins, added because of uncertainty, hurt the engineer’s ability to reach timing closure and manage power.


InPA

Even though ESL has lost some of its luster in the EDA world, the importance of virtual and hardware prototypes for chip design and verification has not lessened. A relatively new player in the market for FPGA-based prototyping is Integrated Prototype Automation (InPA), pronounced “In-Pah.” The company was founded in October 2007 by Thomas Huang and Michael Chang.

This week I spoke with Joe Gianelli, VP of Marketing and Business Development. Joe left Taray after its acquisition by Cadence earlier this year. Not surprisingly, Taray provided tools to integrate multiple FPGAs into printed-circuit-board (PCB) system designs.

InPA’s goal is to replace blind probing with full visibility in the debugging of multiple FPGAs in either FPGA-specific designs or hardware-based ASIC prototyping. InPA’s patent-pending active debug tool is the key to providing this accelerated, full-visibility system. Joe noted that it works with the Incisive, ModelSim and VCS simulation environments, as well as with off-the-shelf and custom prototyping systems.

Coincidence, serendipity or merely quantum entanglements

- Faust in Copenhagen, the God Effect and conversations with Psyleron

On a more personal note, I wish to thank Andrea for sending me a copy of “Faust in Copenhagen: A Struggle for the Soul of Physics” by Gino Segrè. The book deals with a Faustian play that took place in the miracle year of 1932, the same year that witnessed the discovery of the neutron and antimatter, as well as the first artificially created nuclear transmutations. The play and these landmark events in quantum physics had one thing in common. Can you guess what that was? (No spoilers; you’ll have to read the book to find the answer.)

Apparently, my friend Andrea met the author on a recent sojourn to the desert. Knowing my continuing interest in quantum physics, she sent me a copy of the book. This was a rather curious, perhaps even serendipitous, coincidence, since I had just finished reading Brian Clegg’s “The God Effect: Quantum Entanglement, Science’s Strangest Phenomenon.” I had read Clegg’s book as a result of an interview earlier this year with John Valentino, CEO of Psyleron, a start-up company that evolved from the Princeton Engineering Anomalies Research (PEAR) program of the last two decades. Prior to this year, I had not really kept up with developments in physics for a considerable time.

The subject of my conversation with John Valentino was quantum entanglements and their measurable effects at the macro level, i.e., between human beings. It was a fascinating talk that I have yet to publish, but will do so soon.