Solar Cells and Photonic Crystals

A place to discuss solar cells and photonic crystals, both in theory and experiment.

Monday, January 24, 2011

cold fusion claims resurface

Looks like cold fusion is one of those great lies that will never be truly put to rest. Two individual researchers, Andrea Rossi and Sergio Focardi, have announced that they have the ability to fuse hydrogen and nickel into copper, and are able to use 400 W of input heat in a room-temperature reactor to generate 12,400 W of output heat as steam, for a 31x return on energy (possibly 8x in commercial applications). Furthermore, they generate no radioactive waste products. Is this really possible? In a word, no.

1. It is basically impossible to get room-temperature nuclei to fuse at an appreciable rate. That is because of the electrostatic repulsion between two positively charged nuclei: a phenomenon known as the Coulomb barrier. The repulsive energy is proportional to the product of the charges of the two nuclei. That means that fusion between hydrogen and nickel is much less likely than between two hydrogen atoms. Empirically, it has been observed that hot fusion reactors such as the sun tend to favor the fusion of light elements, such as hydrogen, carbon, nitrogen, and oxygen. Heavier elements are usually produced primarily by very large, hot stars, at temperatures of hundreds of millions of kelvin.
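To make point 1 concrete, here's a back-of-the-envelope sketch of the Coulomb barrier height. The numbers are my own illustrative estimates (not from the original claims), assuming the standard contact-radius approximation r ≈ r₀(A₁^(1/3) + A₂^(1/3)) with r₀ ≈ 1.2 fm:

```python
# Rough Coulomb-barrier estimate: U = Z1*Z2*e^2 / (4*pi*eps0*r),
# with the contact radius r ~ r0*(A1^(1/3) + A2^(1/3)), r0 ~ 1.2 fm.
# Back-of-the-envelope only, not a precise nuclear model.
E2_OVER_4PIEPS0 = 1.44  # e^2/(4*pi*eps0) in MeV*fm
R0 = 1.2                # fm

def coulomb_barrier_mev(z1, a1, z2, a2):
    r = R0 * (a1 ** (1 / 3) + a2 ** (1 / 3))  # contact radius, fm
    return z1 * z2 * E2_OVER_4PIEPS0 / r

pp = coulomb_barrier_mev(1, 1, 1, 1)       # proton + proton
p_ni = coulomb_barrier_mev(1, 1, 28, 58)   # proton + nickel-58
print(f"p+p barrier:  {pp:.2f} MeV")
print(f"p+Ni barrier: {p_ni:.2f} MeV ({p_ni / pp:.0f}x higher)")
```

Even the p+p barrier (~0.6 MeV) is about seven orders of magnitude above room-temperature thermal energy (kT ≈ 0.025 eV), and the nickel barrier is an order of magnitude higher still.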

2. The nuclear reaction they propose is wholly inconsistent with their claimed observations. The primary naturally occurring isotopes of nickel are nickel-58 (68.3%) and nickel-60 (26.1%), while the primary naturally occurring isotopes of copper are copper-63 (69.2%) and copper-65 (30.8%). However, if you fuse a single proton with nickel, you should produce primarily copper-59 and copper-61, as well as gamma rays. Both resultant nuclei are highly radioactive, with half-lives of 117 seconds and 3.33 hours, respectively, with correspondingly high specific activities. They decay via electron capture and possibly beta+ decay (depending on the relative energies of the two nuclei), thus emitting neutrinos and possibly positrons (antimatter). Furthermore, the daughter nucleus, nickel-59, is also radioactive over a much longer time scale, with a half-life of 110,000 years. In short, if their reaction works as claimed, they should be producing highly dangerous gamma rays (some possibly produced around 511 keV via intermediate positron-electron annihilation), as well as leaving a highly toxic nuclear waste pile behind.
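To put "correspondingly high specific activities" in numbers, here's a quick sketch using the half-lives quoted above; A = λN, evaluated per gram of the pure isotope (illustrative only):

```python
import math

# Specific activity A = lambda * N = (ln 2 / t_half) * (N_A / molar_mass),
# i.e., decays per second per gram of the pure isotope.
N_A = 6.022e23  # Avogadro's number, 1/mol

def specific_activity_bq_per_g(t_half_s, molar_mass_g):
    lam = math.log(2) / t_half_s      # decay constant, 1/s
    return lam * N_A / molar_mass_g   # Bq per gram

cu59 = specific_activity_bq_per_g(117.0, 59.0)        # half-lives from the text
cu61 = specific_activity_bq_per_g(3.33 * 3600, 61.0)
print(f"Cu-59: {cu59:.2e} Bq/g")
print(f"Cu-61: {cu61:.2e} Bq/g")
```

Both come out many orders of magnitude above radium-226 (1 Ci/g, i.e. 3.7×10^10 Bq/g), so copper produced this way would be anything but a benign byproduct.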

3. Although it doesn't prove anything directly, it's suspicious that the researchers seem extraordinarily eager to circumvent the standard peer review process. Since they failed to publish in a normal peer-reviewed journal, they instead created their own online journal to serve as a vehicle for disseminating their results. And the information they provide in their own journal paper is very vague: they never give all the details of how their reactor is set up, and never quantify the nature of the copper products that were allegedly observed.

4. There are a number of potential alternative explanations for their results that don't require rewriting the laws of physics, such as the possibility they are engaging in standard chemistry. As I first saw pointed out on slashdot.org by number6x, if the electrodes have oxidized nickel on the surface, they can be reduced by hydrogen gas in an exothermic chemical reaction that yields elemental nickel and steam. The presence of copper can be explained by noting that it may have been present all along (nickel and copper commonly are found together in mines). This implies that the authors have essentially discovered a somewhat dull battery chemistry.


Friday, April 24, 2009

fusion

I know this might seem slightly off-topic, but I'm almost dumbstruck by all the recent stories on fusion power. First of all, there was a 60 Minutes story (link), which claimed that since Pons and Fleischmann's original work 20 years ago, which was wholly discredited, significant breakthroughs have been made. However, this doesn't stand up to closer examination. First, there are no theoretical grounds for these claims. Nuclear fusion requires a tremendous amount of energy for a simple reason: every nucleus has a positive electric charge, and because of Coulomb's law, nuclei strongly repel one another before they can get close enough to fuse. Putting hydrogen on a palladium matrix, as all these experimentalists do, has virtually no impact on this basic fact. Rather, very high temperatures are needed to allow nuclei to overcome the potential energy barrier. The theoretical explanations I've seen advanced to explain why cold fusion should work are also complete nonsense, contradicted by the entire field of quantum mechanics. But OK, let's suppose that maybe this is just a novel phenomenon that our theories can't explain yet (even though that's extremely unlikely, because we've probed basic physics through careful experiments at temperatures up to trillions of degrees Celsius). In that case, there are still a number of issues with the results. First, consistency with what we know from previous fusion experiments: if fusion takes place, all of the following byproducts would be expected: high-energy gamma rays, helium, tritium, and free neutrons. People have claimed to see each of these at various times, but never all at once, and never in the right ratios. That's a HUGE red flag. Second, that leads me to issues with reproducibility. The "best" cold fusion researchers get about a 70% "success" rate. However, it really ought to be a deterministic process, reproducible in other labs as well.
The researchers mutter vague statements about the palladium not being cleaved right... but honestly, it sounds to me like an excuse for the embarrassing fact that independent labs can't reproduce their results. The third issue is the lack of consideration of alternatives. The one thing that researchers consistently claim to see is excess heat, evolved over a period of weeks, if not months. However, small errors in calibration, or simple chemical reactions, could produce identical measurements. Calibration errors or chemical reactions with impurities would also explain the unpredictability of the results. In short, there are a number of alternative, much less exotic explanations for these measurements, fully consistent with the laws of physics and chemistry as we understand them. Extraordinary claims to the contrary require extraordinary evidence, which is WHOLLY lacking here.

Another thing I've seen recently is a number of groups claiming they're going to get hot fusion going as an energy source. First, there was an article by Thomas Friedman in the New York Times (link); more recently, there's a startup, Helion Energy, claiming that it'll just take a bit of VC money to start generating fusion power for cheap. Now, I don't want to lump them in with cold fusion, as they're on solid ground in terms of the basic physics. However, there are still major problems. First of all, in order to get hot fusion going, you need to heat up the plasma tremendously and, at the same time, confine it to a small volume, since it has a natural tendency to expand into its surroundings. All of this takes energy. A LOT of energy. In fact, there hasn't really ever been a fusion experiment where more energy came out than went in. The closest to date was the JT-60 tokamak in Japan, which briefly achieved plasma conditions that would correspond to net energy gain if extrapolated to a deuterium-tritium fuel mixture. However, despite its impressive accomplishments, overall it's a net energy sink. There are a number of other issues: controlling the plasma is very difficult, mostly due to our incomplete understanding of the detailed behavior of plasmas (it isn't an easy problem); massive irradiation resulting from production of tritium and neutrons (you might recycle the tritium, but recycling neutrons is mostly a non-starter); not to mention the practical difficulties of building and operating infrastructure big and reliable enough to keep fusion going long enough to make the investment worth it. In short, hot fusion has theoretical potential, but that's all it has for now. A lot of work needs to be done before we take it seriously as an energy source. In the meantime, the basic science is important and ought to be supported for its own sake. And who knows, maybe there will eventually be a breakthrough that makes this sort of thing practical.
But it certainly won't be cheap: the NIF has cost at least $4 billion, and the ITER experiment is projected to run $17 billion over 30 years, in a field notorious for tremendous cost overruns.
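One common way to quantify how far hot fusion is from break-even is the Lawson triple product n·T·τ_E. Here's a toy check, assuming the commonly quoted D-T ignition benchmark of roughly 3×10^21 keV·s/m³ and made-up (hypothetical) plasma parameters; real thresholds depend on density and temperature profiles:

```python
# Toy check against the Lawson triple product for D-T fusion:
# n * T * tau_E must exceed roughly 3e21 keV*s/m^3 for ignition
# (a commonly quoted benchmark; exact threshold depends on profiles).
TRIPLE_PRODUCT_IGNITION = 3e21  # keV * s / m^3

def ignition_margin(n_m3, t_kev, tau_e_s):
    """Ratio of achieved triple product to the ignition benchmark."""
    return n_m3 * t_kev * tau_e_s / TRIPLE_PRODUCT_IGNITION

# Illustrative (hypothetical) tokamak-like numbers: density 1e20 m^-3,
# temperature 10 keV, energy confinement time 1 s.
margin = ignition_margin(n_m3=1e20, t_kev=10.0, tau_e_s=1.0)
print(f"fraction of ignition benchmark: {margin:.0%}")
```

Even these fairly optimistic numbers land well short of the benchmark, which is a compact way of seeing why "more energy out than in" has been so elusive.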

In summary, cold fusion is almost certainly impossible, and hot fusion is almost certainly impractical. I highly doubt these things will change soon enough for us to put off other low-carbon energy research, so I'd encourage everyone reading this article to keep that in mind when prioritizing energy funding (whether private or governmental). And full disclosure, in case you don't know: I'm working on solar energy.

Wednesday, February 11, 2009

A few different solar growth scenarios

Just a quick follow-up on the last post -- you can see the executive summary of Michael Rogol's predictions, including 52 GW of production by 2012:

http://www.photonconsulting.com/solar_annual_2008.php


By comparison, the Prometheus Institute projects 24 GW by 2012:

http://www.pv-tech.org/news/_a/greentech_media_sees_increase_in_global_module_output_to_23.7gw_by_2012/


And Lux Research projects 21 GW by 2012:
http://earth2tech.com/2008/03/20/solar-bubble-to-burst-next-year-report-says/

Tuesday, February 10, 2009

Bold Solar Growth Projections

The solar industry has been growing at an astounding pace this decade, but can the growth continue or even accelerate further? Since 2000, the solar industry has been growing at over a 30% annual rate, and reached an estimated 5.4 GW of annual production in 2008.


Historical growth of the PV industry (measured in MW of annual production), and one potential growth path through 2015. Note the logarithmic scale of the y-axis.

However, a lot of pessimism has crept into the solar industry recently, with concerns about dropping fossil fuel prices and disappearing financing. Despite this, Michael Rogol expects the solar industry to reach 52 GW of production capacity by 2012, corresponding to an annual growth rate of roughly 75% over the next 4 years.
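As a quick sanity check on Rogol's arithmetic, here's the compound annual growth rate implied by going from 5.4 GW in 2008 to 52 GW in 2012:

```python
# Compound annual growth rate implied by a start value, end value,
# and number of years: (end/start)^(1/years) - 1.
def implied_cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

rate = implied_cagr(5.4, 52.0, 4)  # 5.4 GW (2008) -> 52 GW (2012)
print(f"implied annual growth: {rate:.0%}")
```

It comes out right around the quoted 75% per year, so the headline numbers are at least internally consistent.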

What is his reasoning? He made a compelling case in a recent speech at MIT, which focused particularly on the demand side. His argument is that there are two major categories of demand: big, utility-scale developments, and small, rooftop-sized ones. While utility-scale developments have slowed down due to financing issues and consumed most of the headlines, he argues that a veritable armada of small-scale installers is ready to pick up the slack. A small drop in prices, combined with increasingly generous subsidies in the US, Italy, and Australia, to name a few, is inducing the creation of an unbelievable number of small businesses dedicated to rooftop installations. Furthermore, he argues that the financial crisis, memories of recent oil price surges, and environmental concerns have increased small investors' interest in acquiring hard assets such as solar installations.

On the supply side, he feels that the profit margins for the leading players in the solar industry have been so strong that they've driven very rapid growth via reinvestment of extra cash in the firms' core business. First Solar is the perfect example: since their IPO in 2006, they doubled capacity in 2007, and did it again last year.

How will we know whether Rogol's right? It's pretty simple, actually. If large factories producing cost-competitive solar cells keep going full steam ahead, without massive inventory build-ups, then it's pretty likely that the solar slowdown has been overestimated.


Tuesday, November 18, 2008

Is the solar industry doomed?

Is the solar industry doomed? According to some observers, the answer may be yes. For example, see this article in seekingalpha.

However, I feel that article is fairly superficial and alarmist. Certainly, the factors cited represent a downside risk, but taken in a larger context, are not nearly as bad as represented, in my view.

First of all, he claims no one cares about solar with oil at $60 / bbl. However, oil is still at fairly high levels by historical standards (compare prices hovering around $20 / bbl for most of the '90s).

Second, he cites an offhand statement by a scientist at Los Alamos about a shed-sized nuclear reactor that might generate a lot of power within 5 years. However, that by itself does not represent a comprehensive energy solution that excludes solar. There is no product available for sale, and there are no statements about costs, feedstocks, operational safety, or waste disposal.

Third, he worries about pullbacks on subsidies in Spain, and a failed extension in California. While Spain has capped its subsidy program, it may be extended later, and many other countries, including Italy and the US, are considering more subsidies.

Fourth, he cites a drop in 6" wafer costs from $12 to $9. However, that hardly means that thin films are uneconomical. Even at the reduced price of $9 per 6" wafer, the wafer costs ALONE would be $3.29 per watt at 15% efficiency. That's much more expensive than First Solar's total cost of sales -- around $1.25 per watt. Not to mention the fact that thin-film companies are rapidly seeking ways to drive costs down even further.
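For the curious, here's how a $3.29/W figure can be reconstructed. The post doesn't spell out the assumptions, so these are mine: a circular 6-inch (15.24 cm) wafer under standard 1000 W/m² illumination:

```python
import math

# Reconstructing the $/W figure: a circular 6" (15.24 cm) wafer under
# standard 1000 W/m^2 (0.1 W/cm^2) illumination at 15% cell efficiency.
wafer_cost = 9.00                      # dollars per wafer
diameter_cm = 15.24                    # 6 inches
area_cm2 = math.pi * (diameter_cm / 2) ** 2
watts = area_cm2 * 0.1 * 0.15          # incident power density * efficiency
print(f"wafer area: {area_cm2:.1f} cm^2")
print(f"output: {watts:.2f} W -> ${wafer_cost / watts:.2f}/W")
```

Under those assumptions the wafer alone works out to about $3.29 per watt, matching the figure above.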

Fifth, he alleges solar factory utilization will fall to 56%. However, this number is extremely speculative and doesn't reflect the current state of the market, which shows strong utilization and pricing reflective of a seller's market (see, e.g., solarbuzz.com's solar PV module retail price survey).

Sixth, the price-drop argument can be turned the other way: US and Japanese consumers can now buy solar more cheaply than before -- is there a reason why only Germans would buy such cells? The current sales data doesn't support such a hypothesis: "Germany, Japan, and Spain rank as the top markets for solar power, but other Western European nations are coming on fast, as are China and the U.S."

In conclusion, I would encourage everyone to look at this industry carefully and avoid knee-jerk reactions to isolated headlines.

Thursday, May 25, 2006

gratings in 1D and 2D

After a short hiatus caused by the end of the semester, I'm back to blogging! And more specifically, back to discussing light trapping in solar cells.

One interesting issue that arises in designing diffractive light trapping is the choice of periodicity. In a 2D world, only one value must be chosen. It is set by the desired diffraction limit -- for a solar cell, this would generally correspond to the upper end of the absorption spectrum (where the greatest gains are possible). However, the real world is 3D, so one must choose two periods. The easiest choice is to make both the same as in the 2D world. However, such a choice is not necessarily the optimum, or even all that close to it. Ideally, the two gratings would create two sets of diffraction peaks that don't overlap with each other. However, since the peak spacing is not constant, that's probably not realistic. So the 2D case will generally yield less than a 100% enhancement compared to the 1D grating. Also, keep in mind that only the 800-1100 nm range is targeted for enhancement, so the smaller period must be at least 800/1100 ≈ 73% of the larger one. Right now, I'm looking into what relative periods are optimal, and how that's influenced by various factors such as the natural absorption length and material thickness. Hopefully I'll have some results on this idea soon!
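To get a feel for which periods diffract the 800-1100 nm target range at all, here's a sketch of the grating equation at normal incidence. Note this counts orders diffracted into air; inside the silicon the cutoffs scale with the refractive index, so treat it as illustrative only:

```python
import math

# Grating equation at normal incidence: sin(theta_m) = m * lambda / period.
# An order m is propagating (non-evanescent) in air only when
# m * lambda / period <= 1.
def diffraction_angles_deg(wavelength_nm, period_nm):
    """Map of propagating order m -> diffraction angle in degrees."""
    angles = {}
    m = 1
    while m * wavelength_nm / period_nm <= 1:
        angles[m] = math.degrees(math.asin(m * wavelength_nm / period_nm))
        m += 1
    return angles

# Target range from the text: 800-1100 nm, for two trial periods.
for period in (900, 1100):
    print(period, diffraction_angles_deg(800, period),
          diffraction_angles_deg(1100, period))
```

A 900 nm period, for instance, diffracts 800 nm light but leaves 1100 nm light with no propagating order in air, which is the kind of trade-off the period ratio above is meant to balance.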

Friday, May 05, 2006

solar cells #2

I've been thinking some more about the solar cell problem. The most recent question asked by one of my colleagues was: what happens if you have a conventional light-trapping scheme? It seems reasonable to compare that with our problem in the context of the same situation, if possible. One challenge is that in the geometrical optics picture, any direction of light propagation should be permitted, whereas in the crystalline picture, only certain wavevectors are allowed by conservation of crystal momentum. If one starts with wavevector k, one can only add and subtract reciprocal lattice vectors: i.e., k -> k+G. The solution is to start off with a large cell in the direction of periodicity, and increase it until the solution converges for a large enough period. So, I recently performed this test, and found that texturing at the optimal angle (16 degrees for normally incident light) actually yields performance similar to the photonic crystal's. BUT, the good news is that it seems the two can be combined to yield greater performance. For an 8-micron-thick silicon film, I found that 1D texturing and a 2D photonic crystal each yield an enhancement of 10% -- but combined, they yield a 15% improvement. This could have important implications for the implementation of this system in solar cells.

I gave a talk today at the MIT Center for Integrated Photonic Systems meeting in which I discussed my results for several different cases, which you can download here.