
Cheap, high-octane biofuel discovered

Researchers from the National Renewable Energy Laboratory (NREL) have developed a cheap method for producing high-octane gasoline from methanol. They recently published their method in the journal Nature Catalysis. Methanol can be synthesized from CO2 via various routes, as we reported last year. Biomass, such as wood, is one possibility.

The production of biofuels from wood, however, is too expensive to compete with fossil fuels. To solve this problem, the researchers combined their basic research with an economic analysis. They first targeted the most expensive part of the process and then found ways to reduce these costs using methanol as an intermediate.

Until now, converting methanol to gasoline or diesel cost about $1 per gallon. The researchers have now reached a price of about $0.70 per gallon.

The catalytic conversion of methanol into gasoline is an important research area in the field of CO2 recovery. The traditional method is based on multi-stage processes and high temperatures. It is expensive and produces low-quality fuel in small quantities, making it uncompetitive with petroleum-based fuels.

Hydrogen deficiency was the first problem the researchers had to overcome, since hydrogen is the key energy-carrying element in hydrocarbons. The researchers hypothesized that using the transition metal copper would solve this problem, which it did. They estimated that the copper-infused catalyst resulted in 38% more yield at lower cost.

By facilitating the reintegration of C4 byproducts during the homologation of dimethyl ether, the copper zeolite catalyst enabled this 38% increase in product yield and a 35% reduction in conversion cost compared with conventional zeolite catalysts. Alternatively, the C4 byproducts can be routed into a synthetic kerosene that meets five specifications of a typical jet fuel. In that case, fuel synthesis costs increase slightly, but even though the cost savings shrink, the resulting product has a higher value.
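A back-of-the-envelope check (our own arithmetic, not the paper's techno-economic model) shows how the yield gain alone moves the per-gallon cost if total processing costs stay roughly constant:

```python
# Illustrative arithmetic only: how a 38% yield increase alone moves the
# per-gallon conversion cost if total processing costs stay constant.
baseline_cost_per_gallon = 1.00  # USD, conventional methanol-to-gasoline route
yield_gain = 0.38                # 38% more product from the same input

new_cost_per_gallon = baseline_cost_per_gallon / (1 + yield_gain)
print(f"Estimated new cost: ${new_cost_per_gallon:.2f} per gallon")
# ≈ $0.72, in line with the ~$0.70 per gallon reported above
```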

Apart from the costs, the new process offers users further competitive advantages. For example, companies can compete with ethanol producers for renewable fuel credits (if the carbon used comes from biogas or household waste). The process is also compatible with existing methanol plants that use natural gas or solid waste to produce syngas.


Semiconductive nanotubes with photovoltaic effect

Cost-effective and efficient methods for converting sunlight into electricity are the focus of green energy research. Solar cells developed for this purpose are currently made of semiconductors such as silicon. Electrical energy is generated at the junction between two different semiconductors. However, the efficiency of these solar cells has almost reached its theoretical limit. New methods of converting sunlight into electricity must be found if solar energy is to be used as a major source of electricity. An international research team from Germany, Japan and Israel has now made important progress in this direction. Zhang and colleagues recently published their findings in the prestigious journal Nature. They demonstrate a junction-free solar cell made by rolling an atomically thin semiconductor layer into a nanotube.

In a conventional solar cell, two regions of a semiconductor are doped with different chemical elements. The electrical current is carried by the negatively charged electrons of one region and by the positively charged electron vacancies (holes) of the other. At the junction between these two regions, an electric field is created. When sunlight is absorbed at this junction, electron-hole pairs are formed. The electrons and holes are then separated by the electric field, generating an electric current. This conversion of solar energy into electricity is called the photovoltaic effect. It is particularly important for green energy production, but its efficiency has almost reached the theoretical limit, as mentioned above.

In technical terms, the photovoltaic effect occurs at traditional pn junctions, where a p-type material (with an excess of holes) adjoins an n-type material (with an excess of electrons). Current arises from the photo-induced generation of electron-hole pairs and their subsequent separation. Further advances are expected from other photovoltaic effects that do not require a junction and occur only in crystals with broken inversion symmetry. However, the practical implementation of these effects has been impeded by the low efficiency of the materials. Semiconductors with reduced dimensionality or smaller band gaps have been shown to be more efficient. Transition metal dichalcogenides (TMDs), for example, are two-dimensional small-bandgap semiconductors in which various effects arising from broken inversion symmetry have been observed.

The reported bulk photovoltaic effect (BPVE) is based on tungsten disulfide, a member of the TMD family. Crystals of this material have a layered structure and can be exfoliated into atomic sheets, similar to graphite. These sheets can then be rolled by chemical methods into tubes of about 100 nanometers. The authors produced photovoltaic devices from three forms of tungsten disulfide: a monolayer, a bilayer and a nanotube.

In this way, the authors achieved a systematic reduction in crystal symmetry that goes beyond merely breaking inversion symmetry: the transition from a two-dimensional monolayer to a nanotube gave the material polar properties. The photovoltaic current density produced is orders of magnitude greater than that of other comparable materials. The results confirm not only the potential of TMD-based nanomaterials but also the importance of reducing crystal symmetry for improving the BPVE.

While the nanotube devices showed a large BPVE, the monolayer and bilayer devices produced only a negligible electric current under illumination. The researchers attribute the different performance of the devices to differences in their crystal symmetry. In this way, a current can be generated spontaneously in uniform semiconductors, without a junction.

The BPVE was first observed in 1956 at Bell Labs, New Jersey, just two years after the invention of the modern silicon solar cell. The effect is limited to non-centrosymmetric materials, characterized by a lack of symmetry under spatial inversion, that is, the combination of a 180° rotation and a reflection. The effect has two attractive properties: the current generated by light depends on the polarization of the incident light, and the associated voltage can be greater than the band gap of the material, the energy required to excite free, conducting electrons. However, the effect typically has a low conversion efficiency and was therefore of academic rather than industrial interest.

To achieve high efficiency, a photovoltaic material must combine high light absorption with low internal symmetry. However, these two properties usually do not coexist in a given material. Semiconductors that absorb most of the incident sunlight generally have high symmetry, which reduces or even prevents the effect. Low-symmetry materials, such as perovskite oxides, absorb little sunlight due to their large band gaps. To circumvent this problem, efforts have been made to improve light absorption in low-symmetry materials, for example by the doping mentioned above. Meanwhile, it has been shown that the effect can also occur in semiconductors when mechanical fields are used to adjust the crystal symmetry of the material.

The newly discovered solution is encouraging for the production of high-absorption semiconducting nanotubes. In the case of tungsten disulfide, the curved walls of the tube reduce the crystal symmetry of the nanotubes compared with the mono- and bilayers. The combination of excellent light absorption and low crystal symmetry means that the nanotubes exhibit a significant photovoltaic effect. Their current density exceeds that of materials which are inherently low in symmetry. Nevertheless, the conversion efficiency achieved is still much lower than that of the photovoltaic effect in conventional junction-based solar cells.

The authors’ findings demonstrate the great potential of nanotubes in solar energy production and raise various technological and scientific challenges. From an application perspective, it would be useful to produce a solar cell that consists of large arrays of semiconductor nanotubes to check whether the approach is scalable. The direction of the generated current is largely determined by the internal symmetry of the material; uniform symmetry across the nanotube array would therefore be required to create a collective current. Otherwise, the currents of individual tubes could cancel each other out.

At Frontis Energy, we wonder whether the described method could be combined with the classic photovoltaic effect in the same solar cell. That would possibly increase overall efficiency, as the two effects could harvest the solar energy consecutively. Despite the remaining challenges, the presented work offers a path toward the development of highly efficient solar cells.



Machine learning makes smarter batteries

Renewable energies, such as wind and solar, are naturally intermittent. To balance their demand and supply, the batteries of electric vehicles, for example, can be charged and act as an energy buffer for the power grid. Cars spend most of their time idle and could, during this time, feed their electricity back into the grid. While this is still a dream of the future, the commercialization of electric and hybrid vehicles is already creating a growing demand for long-lasting batteries, both for driving and for grid buffering. Consequently, methods for evaluating the state of a battery will become increasingly important.

The long duration of battery health tests is a problem that hinders the rapid development of new batteries. Better methods for forecasting battery life are therefore urgently needed but extremely difficult to develop. Now, Severson and her colleagues report in the journal Nature Energy that machine learning can help predict battery life by means of computer models. The published algorithms use data from early-stage charge and discharge cycles.

Normally, a figure of merit describes the health of a battery. It quantifies the ability of the battery to store energy relative to its original state. The state of health is 100% when the battery is new and decreases with time. This is similar to the state of charge of a battery. Estimating the state of charge of a battery is, in turn, important to ensure safe and correct use. However, there is no consensus in industry or science as to what exactly a battery’s state of health is or how it should be determined.

The state of health of a battery reflects two signs of aging: progressive capacity decline and impedance increase (another measure of electrical resistance). Estimates of a battery’s state of health must therefore take into account both the drop in capacity and the increase in impedance.
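As a minimal illustration, a state-of-health estimate could report both aging indicators relative to their fresh-cell values (a toy sketch with hypothetical numbers, not the authors' definition):

```python
def state_of_health(capacity_now, capacity_new, resistance_now, resistance_new):
    """Toy state-of-health estimate combining capacity fade and impedance rise.

    Returns both indicators as fractions of the fresh-cell values; a real
    battery-management system would fuse them with model- or data-driven
    weighting.
    """
    soh_capacity = capacity_now / capacity_new        # decreases with age
    soh_impedance = resistance_new / resistance_now   # decreases as impedance grows
    return soh_capacity, soh_impedance

# Example: a cell that lost 8% capacity and gained 15% internal resistance
print(state_of_health(0.92, 1.00, 1.15, 1.00))  # → (0.92, ~0.87)
```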

Lithium-ion batteries, however, are complex systems in which both capacity fade and impedance increase are caused by multiple interacting processes. Most of these processes cannot be studied independently since they often occur simultaneously. The state of health can therefore not be determined from a single direct measurement. Conventional health assessment methods include examining the interactions between the electrodes of a battery. Since such methods often intervene directly in the battery system, they render the battery useless, which is hardly desirable.

A battery’s state of health can also be determined in less invasive ways, for example using adaptive models and experimental techniques. Adaptive models learn from recorded battery performance data and adjust themselves. They are useful if system-specific battery information is not available, and they are suitable for diagnosing aging processes. The main problem, however, is that they must be trained with experimental data before they can be used to determine the current capacity of a battery.

Experimental techniques are used to evaluate specific physical processes and failure mechanisms, which allows the rate of future capacity loss to be estimated. Unfortunately, these methods cannot detect intermittent faults. Alternative techniques use the rate of voltage or capacity change (rather than raw voltage and current data). To accelerate the development of battery technology, further methods need to be found that can accurately predict battery life.

Severson and her colleagues have created a comprehensive data set that includes the performance data of 124 commercial lithium-ion batteries during their charge and discharge cycles. The authors used a variety of rapid charging conditions with identical discharge conditions. This approach produced a wide spread of battery lifetimes, ranging from 150 to 2,300 cycles.

The researchers then used machine learning algorithms to analyze the data, creating models that reliably predict battery life. After the first 100 cycles of each experimentally characterized battery, their model already detected early signs of capacity fade. The best model correctly predicted the lifetime for about 91% of the data sets in the study. Using only the first five cycles, batteries could be classified into categories with short (<550 cycles) or long lifetimes.
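The published pipeline is more elaborate, but the core idea, classifying cells as short- or long-lived from early-cycle features, can be sketched as follows (feature names and data are synthetic placeholders, not the study's data set):

```python
# Minimal sketch: classify cells as short- vs long-lived from early-cycle
# features. The features and labels below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells = 124  # same count as the study; the values below are made up

# Hypothetical early-cycle features, e.g. differences in discharge capacity
# between early cycles and their variance:
X = rng.normal(size=(n_cells, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=n_cells)) > 0  # True = long-lived (>550 cycles)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```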

The researchers’ work shows that data-driven modeling using machine learning allows forecasting the state of health of lithium-ion batteries. The models can identify aging processes that are not otherwise apparent in capacity data during early cycles. Accordingly, the new approach complements previous predictive models. At Frontis Energy, we also see the potential of combining such data with models that predict the behavior of other complex dynamic systems.



Faster photoelectrical hydrogen

Achieving high current densities while maintaining high energy efficiency is one of the biggest challenges in improving photoelectrochemical devices. Higher current densities accelerate the production of hydrogen and other electrochemical fuels.

Now a compact, solar-powered, hydrogen-producing device has been developed that provides the fuel at record speed. In the journal Nature Energy, the researchers led by Saurabh Tembhurne describe a concept that allows capturing concentrated solar radiation (up to 474 kW/m²) through thermal integration, mass transport optimization and better electronics between the photoabsorber and the electrocatalyst.

The research group at the Swiss Federal Institute of Technology in Lausanne (EPFL) calculated the maximum theoretical efficiency increase. Then, they experimentally verified the calculated values using a photoabsorber and an iridium-ruthenium oxide-platinum electrocatalyst. The electrocatalyst reached a current density greater than 0.88 A/cm². The calculated conversion efficiency of solar energy into hydrogen was more than 15%. The system was stable under various conditions for more than two hours. Next, the researchers want to scale up their system.
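Faraday's law ties this current density directly to the hydrogen production rate. A quick sanity check (our own sketch; it assumes two electrons per H2 molecule and 100% Faradaic efficiency):

```python
# Hydrogen production rate implied by the reported current density,
# assuming 2 electrons per H2 and 100% Faradaic efficiency.
F = 96485   # C/mol, Faraday constant
j = 0.88    # A/cm², reported current density
Vm = 24.5   # L/mol, molar gas volume at ~25 °C and 1 atm

mol_h2_per_h_cm2 = j * 3600 / (2 * F)   # mol H2 per hour per cm²
print(f"{mol_h2_per_h_cm2:.3f} mol/(h·cm²) "
      f"≈ {mol_h2_per_h_cm2 * Vm:.2f} L/(h·cm²)")
# ≈ 0.016 mol/(h·cm²), i.e. roughly 0.4 L of H2 per hour per cm² of catalyst
```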

The produced hydrogen can be used in fuel cells for power generation, which makes the developed system suitable for energy storage. Hydrogen-powered electricity generation emits only pure water. However, the clean and fast production of hydrogen remains a challenge. The photoelectric method uses materials similar to those of solar modules. In the new system, the electrolytes were based on water, although ammonia would also be conceivable. Sunlight reaching these materials triggers a reaction in which water is split into oxygen and hydrogen. So far, however, no photoelectric method has been usable on an industrial scale.

2 H2O → 2 H2 + O2; ∆G°’ = +237 kJ/mol (H2)

The newly developed system absorbed more than 400 times the amount of solar energy that normally shines on a given area. The researchers used high-power lamps to provide the necessary “solar energy”. Existing solar systems concentrate solar energy to a similar degree with the help of mirrors or lenses. The waste heat is used to accelerate the reaction.

The team predicts that the test equipment, with a footprint of approximately 5 cm, can produce an estimated 47 liters of hydrogen gas in six hours of sunshine. This is the highest production rate per area for such solar-powered electrochemical systems. At Frontis Energy, we hope to be able to test and offer this system soon.



Ammonia energy storage #3

As a loyal reader of our blog, you will certainly remember our previous posts on ammonia energy storage. There, we described possible ways to obtain ammonia from air, as well as the recovery of its energy in the form of methane (patent pending WO2019/079908A1). Since global food production requires large amounts of ammonia fertilizers, the technologies for its extraction from air are already very mature. These technologies are essentially all based on the Haber-Bosch process, which was industrialized at the beginning of the last century. In this process, atmospheric nitrogen (N2) is reduced to ammonia (NH3). Despite the simplicity of the molecules involved, the cleavage of the strong nitrogen−nitrogen triple bond in N2 and the formation of nitrogen−hydrogen bonds pose a major challenge for catalytic chemists. The reaction usually takes place under harsh conditions and requires a lot of energy, i.e. high reaction temperatures, high pressures and complicated combinations of reagents, which are also often expensive and energy-intensive to manufacture.

Now, a research group led by Yuya Ashida has published an article in the renowned journal Nature in which they show that a samarium compound in aqueous solution, combined with a molybdenum catalyst, can form ammonia from atmospheric nitrogen. The work opens up new possibilities in the search for ammonia synthesis routes under ambient conditions. Under such conditions, less energy is required to produce ammonia, resulting in higher energy efficiency for energy storage. In today’s Haber-Bosch process, air and hydrogen gas are combined over an iron catalyst. The resulting global ammonia production ranges from 250 to 300 tonnes per minute, delivering fertilizers that feed nearly 60% of the world’s population (The Alchemy of Air, available at Amazon).

Comparison of different approaches to ammonia production. Top: In the industrial Haber-Bosch synthesis of ammonia (NH3), nitrogen gas (N2) reacts with hydrogen molecules (H2), typically in the presence of an iron catalyst. The process requires high temperatures and pressures but is thermodynamically ideal because only little energy is wasted on side reactions. Center: Nitrogenase enzymes catalyze the reaction of nitrogen with six electrons (e−) and six protons (H+) under ambient conditions to form ammonia. However, two additional electrons and protons form one molecule of H2. The conversion of ATP (the biological energy “currency”) into ADP drives the reaction. This reaction has a high chemical overpotential: it consumes much more energy than is needed for the actual ammonia-forming reaction. Bottom: In the new reaction proposed by Ashida and colleagues, a mixture of water and samarium diiodide (SmI2) is converted to ammonia using nitrogen under ambient conditions in the presence of a molybdenum catalyst. SmI2 weakens the O−H bonds of the water and generates hydrogen atoms, which then react with atmospheric nitrogen.

On an industrial scale, ammonia is synthesized at temperatures exceeding 400°C and pressures of approximately 400 atmospheres. These conditions are often referred to as “harsh”, and in the early years of Haber-Bosch development they were difficult to control. Fatal accidents were not uncommon. This has motivated many chemists to find “milder” alternatives, which has always meant searching for new catalysts that lower operating temperatures and pressures. New catalysts would ultimately reduce the capital investment in the construction of new fertilizer plants. Since ammonia synthesis is one of the largest producers of carbon dioxide, this would also reduce the associated emissions.

Like many other chemists before them, the authors took their inspiration from nature. Nitrogenase enzymes carry out the biological conversion of atmospheric nitrogen into ammonia, a process called nitrogen fixation. On today’s Earth, this process is the source of the nitrogen atoms in amino acids and nucleotides, the elementary building blocks of life. In contrast to the Haber-Bosch process, nitrogenases do not use hydrogen gas as a source of hydrogen atoms. Instead, they transfer protons (hydrogen ions, H+) and electrons (e−) to each nitrogen atom to form N−H bonds. Although nitrogenases fix nitrogen at ambient temperature, they use eight protons and electrons per molecule of N2, even though the stoichiometry of the reaction requires only six of each. This excess provides the necessary thermodynamic drive for nitrogen fixation, but it also means that nitrogenases have a high chemical overpotential: they consume much more energy than would actually be needed for nitrogen fixation.
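For reference, the textbook stoichiometry of the nitrogenase reaction (our addition; Pi stands for inorganic phosphate) makes this excess explicit:

N2 + 8 H+ + 8 e− + 16 ATP → 2 NH3 + H2 + 16 ADP + 16 Pi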

The now published reaction is not the first attempt to mimic the nitrogenase reaction. In the past, metal complexes were combined with proton and electron sources to convert atmospheric nitrogen into ammonia. The same researchers had previously developed molybdenum complexes that catalyze nitrogen fixation in this way, producing 230 ammonia molecules per molybdenum complex. The associated overpotentials were significant at almost 1,300 kJ per mole of nitrogen. By comparison, the Haber-Bosch process is not that energy-intensive, provided the right catalyst is used.

The challenge for catalysis researchers is to combine the best of the biological and industrial approaches to nitrogen fixation, so that the process proceeds at ambient temperatures and pressures while the catalyst reduces the chemical overpotential far enough that new fertilizer plants no longer require such high capital investments. This is a major challenge because no combination of acids (which serve as proton sources) and reducing agents (the electron sources) is available that fixes nitrogen at the thermodynamic level of hydrogen gas while still being reactive enough to form N−H bonds at room temperature. In the newly described pathway with molybdenum and samarium, the researchers adopted a strategy in which the proton and electron sources are no longer separate. This fundamentally new approach to catalytic ammonia synthesis makes use of a phenomenon known as coordination-induced bond weakening. In the proposed pathway, the phenomenon is based on the interaction of samarium diiodide (SmI2) with water.

Water is stable because of its strong oxygen-hydrogen (O−H) bonds. However, when the oxygen atom of water coordinates to SmI2, it exposes its lone electron pair and its O−H bonds are weakened. The resulting mixture thus becomes a readily available source of hydrogen atoms, that is, of protons and electrons. The researchers led by Yuya Ashida use this mixture with a molybdenum catalyst to fix nitrogen. SmI2-water mixtures are particularly suitable for this type of catalysis because considerable coordination-induced bond weakening had previously been measured in them and exploited, among other things, for the formation of carbon-hydrogen bonds.

The extension of this idea to catalytic ammonia synthesis is remarkable for two reasons. First, the molybdenum catalyst facilitates ammonia synthesis in aqueous solution. This is surprising because molybdenum complexes are usually degraded in water. Second, the use of coordination-induced bond weakening provides a new method for nitrogen fixation at ambient conditions. It also avoids potentially dangerous combinations of proton and electron sources, which can pose a fire hazard. The authors’ approach also works when ethylene glycol (HOCH2CH2OH) is used instead of water, extending the range of candidate proton and electron sources by a further precursor.

Ashida and colleagues propose a catalytic cycle for their process in which the molybdenum catalyst first coordinates to nitrogen and cleaves the N−N bond to form a molybdenum nitrido complex containing a molybdenum-nitrogen triple bond. The SmI2-water mixture then delivers hydrogen atoms to this complex, eventually producing ammonia. The formation of N−H bonds from molybdenum nitrido complexes represents a significant thermodynamic challenge, since these N−H bonds are also weakened by the molybdenum. Nevertheless, this disadvantage is offset by the reduction of the chemical overpotential. The SmI2 not only facilitates the transfer of hydrogen atoms but also keeps the metal in a reduced form, preventing undesired molybdenum oxide formation in aqueous solution.

The new process still has significant operational hurdles to overcome before it can be used on an industrial scale. For example, SmI2 is used in large quantities, which generates a lot of waste. The separation of ammonia from aqueous solutions is also energy-intensive. However, if the process were used for energy storage in combination with our recovery method, the separation from the aqueous solution would no longer be necessary. Finally, there is still a chemical overpotential of about 600 kJ/mol. Future research should therefore focus on finding alternatives to SmI2, for example based on metals that are more abundant than samarium and that also promote coordination-induced bond weakening. As Fritz Haber and Carl Bosch experienced themselves, a newly developed method usually takes some time before it becomes available on an industrial scale.



Melting ice sheets in Greenland contribute 25% to sea level rise

Recently we reported on the loss of snow cover in Europe. Not only is the snow gone in many parts of Europe, Greenland’s ice cover is melting as well. The Greenland ice sheet contributes 25% to global sea-level rise, making it the largest contributor of the cryosphere. The increased mass loss of Greenland ice during the 21st century is mainly due to increased surface water runoff, of which ~93% comes directly from the small ablation zone of the ice sheet (~22% of the ice surface). As the snow melts in summer, bare glacier ice is increasingly exposed in this ablation zone. Bare ice is darker and less porous than snow. It absorbs more than twice as much solar radiation while also retaining less meltwater. Bare ice produces a large proportion (~78%) of Greenland’s total runoff into the sea, although only a small area of the ice is exposed in summer. Accurately capturing the reduced albedo and the full extent of bare ice in climate models is critical to determining Greenland’s present and future runoff contribution to sea-level rise.

The mass loss of the Greenland ice sheet has recently increased due to the accelerated melting of its surface. As this melting is critically affected by surface albedo, understanding the processes and potential feedbacks regarding the albedo is required for accurately forecasting mass loss. The radiation variability caused by the fluctuating extent of the ablation zone affected melt five times more strongly than the hydrological and biological processes that also darken the ice sheet. Because the ice sheet is flatter at higher altitudes, fluctuations of the snowline have an even greater impact on melt when the climate is warmer. As a result of these fluctuations, the mapped bare-ice area was largest during the summer of 2012, the record year of snowmelt, at 300,050 km². Bare ice thus accounted for 16% of the ice surface. The smallest extent of bare ice, 184,660 km², was observed in 2006. This corresponded to 10% of the ice surface, i.e. almost 40% less area than in 2012. However, the observed variation was high and the observation period was too short for a solid trend assessment.
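The quoted percentages follow directly from the mapped areas; a quick consistency check using only the figures above:

```python
# Quick consistency check of the bare-ice figures quoted above.
max_bare_ice = 300_050  # km², summer 2012 (record melt year)
min_bare_ice = 184_660  # km², summer 2006

reduction = 1 - min_bare_ice / max_bare_ice
print(f"2006 bare-ice area was {reduction:.0%} smaller than in 2012")  # ≈ 38%

# The shares of total ice surface (16% and 10%) imply a consistent
# ice-sheet area of roughly 1.8-1.9 million km²:
print(max_bare_ice / 0.16, min_bare_ice / 0.10)
```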

Current climate models are too inaccurate in predicting the sea-level rise during strong melt years, leading to uncertainty in the estimation of Greenland’s contribution to global sea-level rise. To understand the factors that influence melting, Jonathan Ryan of Brown University, Providence, Rhode Island, and his colleagues investigated Greenland’s snowline. At altitudes below the snowline, the darker ice is not covered by snow. This snowline moves up or down with Greenland’s seasons. The researchers mapped these movements between 2001 and 2017 using satellite images. The average elevation of the snowline at the end of summer ranged from 1,330 m in 2009 to 1,650 m in 2012. These snowline fluctuations are the most important factor in how much solar energy the ice sheet absorbs, and modelers must consider this effect to improve their predictions. Knowing how much and how fast the Greenland ice melts will help us take better protective measures. At Frontis Energy, we think that the best protection against sea-level rise is the prevention and recycling of CO2.



Economic losses caused by flooding due to global warming

In Europe, floods are linked to strong fluctuations of atmospheric pressure, known as the North Atlantic Oscillation. Stefan Zanardo and his colleagues at Risk Management Solutions, London, UK, analyzed historical records of severe floods in Europe since 1870 and compared them with the patterns of atmospheric pressure at the time of each flood. When the North Atlantic Oscillation is in a positive state, a depression over Iceland drives winds and storms across northern Europe. In a negative state, by contrast, it makes southern Europe wetter than usual. Floods normally occur in northern Europe and cause the most damage when the North Atlantic Oscillation was positive in winter. If enough rain has already fallen to saturate the soil, the conditions for high flood risk are met. Air pressure patterns over Europe may change with global warming, and public administrations should take this into account when assessing the flood risk of a region, the researchers say.

This is important because flooding in Europe often causes loss of life, significant property damage, and business interruptions. Global warming will further worsen this situation, and the risk distribution will change as well. The frequent occurrence of catastrophic flooding in recent years has sparked strong interest in this problem in both the public and private sectors. The public sector has been working to improve early warning systems, which indeed have economic benefits. In addition, various risk-mitigating strategies have been implemented in European countries. These include flood protection, measures to increase risk awareness, and risk transfer through better dissemination of flood insurance. The fight against the root cause, namely global warming, still lags far behind what is needed.

Correlations between large-scale climate patterns, in particular the North Atlantic Oscillation, and extreme events in the water cycle on the European continent have long been described in the literature. With more severe and more frequent flooding, as well as alarming global warming scenarios, concerns over future flood-related economic losses have moved into the focus of public attention. Although it is known that climatic patterns also control meteorological events, it is not always clear whether this link affects the frequency and severity of flooding and the associated economic losses. In their study, the researchers relate the North Atlantic Oscillation to economic flood losses.

The researchers used recent data from flood databases as well as disaster models to establish this relation. The models allowed them to quantify the economic losses that are ultimately caused by the North Atlantic Oscillation. These losses vary widely between the countries within the influence of the North Atlantic Oscillation. The study shows that the North Atlantic Oscillation is a good predictor of average losses in the long term. Based on the predictability of the North Atlantic Oscillation, the researchers argue that the temporal variations of flood risk caused by climate oscillations, in particular, can be forecast. This can help counter catastrophic flood events early on, so that flood damage can be minimized or even avoided. As scientists improve their predictions of the North Atlantic Oscillation, society will be better prepared for future flooding.



Ammonia energy storage #2

Recently, we reported on plans by Australian entrepreneurs and their government to use ammonia (NH3) to store excess wind energy. We proposed converting ammonia and CO2 from wastewater into methane gas (CH4), because it is more stable and easier to transport. The procedure follows the chemical equation:

8 NH3 + 3 CO2 → 4 N2 + 3 CH4 + 6 H2O
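For readers who want to verify the stoichiometry, here is a generic element-balance check (our own sketch, not part of the published work):

```python
# Element balance check for: 8 NH3 + 3 CO2 -> 4 N2 + 3 CH4 + 6 H2O
from collections import Counter

def count_atoms(side):
    """side: list of (coefficient, {element: count}) tuples."""
    total = Counter()
    for coeff, formula in side:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

left  = [(8, {"N": 1, "H": 3}), (3, {"C": 1, "O": 2})]
right = [(4, {"N": 2}), (3, {"C": 1, "H": 4}), (6, {"H": 2, "O": 1})]

assert count_atoms(left) == count_atoms(right)  # passes: the equation balances
print(count_atoms(left))  # Counter({'H': 24, 'N': 8, 'O': 6, 'C': 3})
```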

Now we have published a scientific article in the online magazine Frontiers in Energy Research where we show that the process is thermodynamically possible and does indeed occur. Methanogenic microbes in anaerobic digester sludge remove the hydrogen (H2) formed by electrolysis from the reaction equilibrium. As a result, the redox potentials of the oxidative (N2/NH3) and the reductive (CO2/CH4) half-reactions come so close that the process becomes spontaneous. It requires a catalyst in the form of wastewater microbes.

Pourbaix diagram of ammonium oxidation, hydrogen formation and CO2 reduction. At pH 7 and higher, the oxidation of ammonium coupled to methanogenesis becomes thermodynamically possible.

To prove our idea, we first searched for suitable microbes that could carry out the ammonia oxidation. For our experiments in microbial electrolysis cells, we used microorganisms from sediments of the Atlantic Ocean off Namibia as starter cultures. Marine sediments are particularly suitable because they are relatively rich in ammonia, free of oxygen (O2), and contain less organic carbon than other ammonia-rich environments. Excluding oxygen is important because it is used by ammonia-oxidizing microbes in a process called nitrification:

2 NH3 + 3 O2 → 2 NO2− + 2 H+ + 2 H2O

Nitrification would have caused an electrochemical short circuit, as the electrons would have been transferred from the ammonia directly to the oxygen. This would have bypassed the anode (the positive, electron-accepting electrode) and stored the energy of the ammonia in the water, where it is useless. This is because anodic water oxidation consumes much more energy than the oxidation of ammonia. In addition, precious metals are often necessary for water oxidation. Without oxygen production at the anode, we were able to show that the oxidation of ammonium (the dissolved form of ammonia) is coupled to the production of hydrogen.

Oxidation of ammonium to nitrogen gas is coupled to hydrogen production in microbial electrolysis reactors. The applied potentials are +550 mV to +150 mV

It was important that the electrochemical potential at the anode was more negative than the +820 mV required for water oxidation. For this purpose, we used a potentiostat that kept the electrochemical potential constant between +550 mV and +150 mV. At all these potentials, N2 was produced at the anode and H2 at the cathode. Since the only source of electrons in the anode compartment was ammonium, the electrons for hydrogen production could only come from the ammonium oxidation. In addition, ammonium was also the only nitrogen source for the production of N2. Consequently, the two processes had to be coupled.

In the next step, we wanted to show that this process also has a useful application. Nitrogen compounds, predominantly ammonium, are commonly found in wastewater; among them are also drugs and their degradation products. At the same time, 1-2% of the energy produced worldwide is consumed by the Haber-Bosch process, in which N2 is extracted from the air to produce nitrogen fertilizer. Another 3% of our energy is then used to remove the same nitrogen from our wastewater. This senseless waste of energy causes 5% of our greenhouse gas emissions. In contrast, wastewater treatment plants could be net energy producers. In fact, a small part of the energy in wastewater has been recovered as biogas for more than a century. During biogas production, organic material from anaerobic digester sludge is decomposed by microbial communities and converted into methane:

H3C−COO− + H2O → CH4 + HCO3−; ∆G°’ = −31 kJ/mol (CH4)

The reaction produces CO2 and methane at a ratio of 1:1. Unfortunately, the CO2 in the biogas makes it almost worthless. As a result, biogas is often flared off, especially in places where natural gas is cheap. Removing the CO2 would greatly upgrade the product and can be achieved using CO2 scrubbers. More strongly reduced carbon sources can shift the ratio of CO2 to CH4, but some CO2 always remains in the biogas. Adding hydrogen to anaerobic digesters solves this problem technically. The process is called biogas upgrading. The hydrogen could be produced by electrolysis:

2 H2O → 2 H2 + O2; ∆G°’ = +237 kJ/mol (H2)

Electrolysis of water, however, is expensive and requires a high energy input, because the electrolysis of water takes place at a relatively high voltage of 1.23 V. One way to get around this is to replace the water with ammonium:

2 NH4+ → N2 + 2 H+ + 3 H2; ∆G°’ = +40 kJ/mol (H2)
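The voltage advantage can be estimated from the relation E = ΔG°’/(nF), with n = 2 electrons per H2. A minimal sketch (our own check; it derives the 1.23 V for water from the free energy above and uses the 136 mV for ammonium discussed below, ignoring overpotentials):

```python
# Cell voltage from Gibbs free energy: E = ΔG°' / (n·F), with n = 2 e− per H2.
F = 96485           # C/mol, Faraday constant
dG_water = 237_000  # J/mol H2, water electrolysis (see equation above)

E_water = dG_water / (2 * F)
print(f"Water electrolysis: {E_water:.2f} V")  # ≈ 1.23 V

E_ammonium = 0.136  # V, value for ammonium oxidation given in the text
print(f"Theoretical energy saving: {1 - E_ammonium / E_water:.0%}")  # ≈ 89%
```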

With ammonium, the reaction takes place at only 136 mV, which saves a corresponding amount of energy. With suitable catalysts, ammonium could thus serve as a reducing agent for hydrogen production. Microorganisms in the wastewater could be such catalysts. Moreover, in the absence of oxygen, methanogens become active in the wastewater and consume the produced hydrogen:

4 H2 + HCO3− + H+ → CH4 + 3 H2O; ∆G°’ = −34 kJ/mol (H2)

The methanogenic reaction keeps the hydrogen concentration so low (usually below 10 Pa) that the ammonium oxidation proceeds spontaneously, i.e. with energy gain:

8 NH4+ + 3 HCO3− → 4 N2 + 3 CH4 + 5 H+ + 9 H2O; ∆G°’ = −30 kJ/mol (CH4)

This is exactly the reaction described above. Bioelectrical methanogens grow at cathodes and belong to the genus Methanobacterium. Members of this genus thrive at low H2 concentrations.

The low energy gain is due to the small potential difference of ΔEh = +33 mV between CO2 reduction and ammonium oxidation (see Pourbaix diagram above). The energy captured is barely sufficient for ADP phosphorylation (∆G°’ = +31 kJ/mol). In addition, the bonds of nitrogen compounds are innately strong, normally requiring strong oxidants such as O2 (nitrification) or nitrite (anammox) to break them.

Instead of strong oxidizing agents, an anode can provide the activation energy for the ammonium oxidation, for example when poised at +500 mV. However, such positive redox potentials do not occur naturally in anaerobic environments. We therefore tested whether the ammonium oxidation can be coupled to hydrogenotrophic methanogenesis by offering a positive electrode potential without O2. Indeed, we demonstrated this in our article and have filed a patent application. With our method, one could, for example, profitably remove ammonia from industrial wastewater. The method is also suitable for energy storage when, for example, ammonia is synthesized using excess wind energy.


Nanomaterials in bio-electrical systems could improve performance

Since Professor Potter’s discovery of the ability of microbes to turn organic molecules into electricity using microbial fuel cells (MFC) more than a century ago (Potter MC, 1911, Proc Roy Soc Lond Ser B 84:260–276), much research has been done to improve their performance. Unfortunately, this did not produce an economically viable technology, and MFCs never made it out of the professors’ classrooms. This may change now that advanced nanomaterials are available.

The testing of nanomaterials in bio-electrical systems has experienced a Cambrian explosion. The focus has usually been on electrodes, membranes, and electrolytes, with endless possibilities for finding high-performing composites. The benefits of such materials include a large surface area, cost savings, and scalability, all of which are required to successfully commercialize bio-electrical systems. The large-scale commercial application could be wastewater treatment. In our recently published literature survey, we discovered that there is no common performance benchmark, as is standard in photovoltaics or for batteries. To normalize our findings, we used dollars per peak power capacity ($/Wp), the standard metric in photovoltaics. The median cost for air cathodes of MFCs is $4,700/Wp ($2,800/m²). Platinum on carbon (Pt/C) and carbon nanofibers are the best-performing materials at $500/Wp (Pt/C $2,800/m²; nanofibers $2,000/m²).
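Converting between area-normalized costs and cost per peak power is straightforward. A small sketch (the formula is generic; the example values are the medians quoted above):

```python
# Convert an area-based electrode cost into cost per peak power capacity.
def cost_per_wp(cost_per_m2, peak_power_w_per_m2):
    """$/Wp = ($/m²) / (Wp/m²)."""
    return cost_per_m2 / peak_power_w_per_m2

# The quoted medians ($2,800/m² and $4,700/Wp) imply a peak power density
# of roughly 0.6 W/m² for MFC air cathodes:
implied_power_density = 2800 / 4700
print(f"{implied_power_density:.2f} W/m²")
print(cost_per_wp(2800, implied_power_density))  # recovers ≈ $4,700/Wp
```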

We found that carbon-based nanomaterials often deliver performance comparable to Pt/C. While MFCs are still far from being profitable, microbial electrolysis cells already are. With these new carbon-based nanomaterials, MFCs too are moving closer to becoming an economic reality. Graphene and carbon nanotubes are promising materials when combined with minerals such as manganese or iron oxides. However, the price of graphene is still too high for MFCs to become economically viable in wastewater treatment. The cost of microbial electrolysis, in contrast, is already so low that it is a serious alternative to traditional wastewater treatment, as we show in the featured image above. For high-strength wastewater, a treatment plant can in fact turn into a power plant, with excess power being offered to surrounding neighborhoods. The cost of microbial electrolysis is reduced by using a combination of cheap steel and graphite.

Relationship between MEC reactor capacity and total electrode cost including anode and cathode. Errors are standard deviations of four different tubular reactor designs. Anodes are graphite granules and cathodes are steel pipes

 

Graphite, in turn, is the precursor of graphene, a promising material for MFC electrodes. When graphite flakes are reduced to a few graphene layers, some of the material’s most technologically important properties are greatly improved. These include the overall surface area and the elasticity. Graphene is thus, in effect, very thin graphite, and many manufacturers exploit this to sell material that is really just cheap graphite. In the journal Advanced Materials, Kauling and colleagues published a systematic study of graphene from sixty manufacturers and found that many high-priced graphene products consist mainly of graphite powder. The study found that less than 10% of the material in most products was graphene, and none of the tested products contained more than 50% graphene. Many were heavily contaminated, most likely with chemicals used in the production process. Such contamination can often lead to a material having catalytic properties that would not have been observed otherwise, as reported by Wang and Pumera.

There are many methods of producing graphene. One of the simplest is deposition on a metallic surface, as we describe in our latest publication:

Single-layer graphene (SLG) and multilayer graphene (MLG) are synthesized by chemical vapor deposition (CVD) from a carbon precursor on catalytic metal surfaces. In a surface-mediated CVD process, the carbon precursor, e.g. isopropyl alcohol (IPA), is decomposed on the metal surface, e.g. Cu or Ni. In order to control the number of graphene layers formed, the solubility of the carbon precursor on the metal catalyst surface must be taken into account. Due to the low solubility of the precursor in Cu, SLG can be formed. It is difficult to grow SLG on the surface of a metal with a high affinity for the precursor.

Protocol:
The protocol is a cheap, safe and simple method for the synthesis of MLG films by CVD in 30-45 minutes in a chemistry lab. A nickel foil is submersed in acetic acid for etching and then transferred to an airtight quartz tube, which protects the system from ambient oxygen and water vapor. Nitrogen gas is bubbled through the IPA, and the resulting IPA-saturated gas is passed through the closed system. The exhaust gas is washed in a beaker with water or in a gas wash bottle. The system is purged for 5 minutes at a flow rate of about 50 cc/min. As soon as the flame of a Meker burner reaches 575-625 °C, the burner is positioned under the nickel foil so that sufficient energy is available for the formation of graphene. The flame is extinguished after 5-10 minutes to stop the reaction, and the system is left to cool for 5 minutes. The graphene-coated Ni foil is then recovered.

But how thin must graphite flakes be to behave as graphene? A common convention, supported by the International Organization for Standardization (ISO), is that flakes with more than ten graphene layers consist essentially of graphite. Thermodynamics dictates that each atomic layer in a flake with ten or fewer layers behaves as a single graphene crystal at room temperature. In addition, the stiffness of graphite flakes increases with layer thickness, which means that thin graphene flakes are orders of magnitude more elastic than thicker graphite flakes.

Unfortunately, to actually use graphene in bioelectric reactors, you still have to make it yourself. The ingredients can be found in our DIY Shop.

 

Better heat exchangers for concentrated solar power

Solar thermal systems are a good example of the particle-wave dualism expressed by Planck’s constant h: E = hf, where f is the frequency of the light and E is the resulting energy. Thus, the higher the frequency of the light, the higher the energy. Solar thermal metal collectors transform high-frequency light into an abundance of low-frequency radiation through Compton shifts. Glass or ceramic coatings with high visible and UV transmittance let sunlight in but absorb the low-frequency infrared radiation re-emitted by the metal (so-called heat blockers). The efficiency of a solar thermal system improves significantly with increasing size, which is also its biggest advantage over photovoltaic generators. One disadvantage, however, is the downstream conversion of heat into electricity with the help of heat exchangers and turbines, a problem not only in solar thermal systems.
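To put numbers on E = hf, here is a short sketch comparing a visible and an infrared photon (standard physical constants; the wavelengths are chosen for illustration):

```python
# Photon energies from E = h·f = h·c/λ for a visible and an infrared photon.
h = 6.626e-34   # J·s, Planck constant
c = 2.998e8     # m/s, speed of light

for name, wavelength in [("green (550 nm)", 550e-9), ("infrared (10 µm)", 10e-6)]:
    E = h * c / wavelength
    print(f"{name}: {E:.2e} J ≈ {E / 1.602e-19:.2f} eV")
# Green light carries ~2.3 eV per photon, thermal infrared only ~0.12 eV:
# the collector turns few high-energy photons into many low-energy ones.
```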

To supply the turbines with hot gas (supercritical CO2), heat exchangers are necessary. These heat exchangers transfer the heat generated by a power plant to the working fluid of a heat engine (usually a steam turbine), which converts the heat into mechanical energy. The mechanical energy is then used to generate electricity. These heat exchangers are operated at ~800 Kelvin and could be more efficient if the temperature were above 1,000 Kelvin. The entire process of converting heat into electricity is called a power cycle and is a critical step in power generation by solar thermal plants. Heat exchangers are obviously pivotal elements in this process.
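The incentive for higher operating temperatures can be illustrated with the Carnot limit, an idealized upper bound rather than the efficiency of a real supercritical-CO2 cycle (the 300 K sink temperature is our assumption):

```python
# Carnot limit for the power cycle at the two operating temperatures above.
# This is an idealized upper bound, not the efficiency of a real sCO2 turbine.
def carnot_efficiency(t_hot_k, t_cold_k=300.0):
    return 1 - t_cold_k / t_hot_k

for t in (800, 1000):
    print(f"T_hot = {t} K: eta_max = {carnot_efficiency(t):.1%}")
# 800 K → 62.5%, 1000 K → 70.0%: hotter heat exchangers leave more
# thermodynamic room for efficient conversion.
```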

Ceramics are a great material for heat exchangers because they can withstand extreme temperature fluctuations. However, unlike metals, ceramics are not easy to shape; only relatively coarse shapes can be made quickly and easily. Metals, in contrast, can be easily formed and have a high mechanical strength. Metals and ceramics have been valued for centuries for their distinctive properties. For example, bronze and iron have good impact resistance and are so malleable that they have been made into complex shapes such as weapons and locks. Ceramics, like those used to make pottery, could only be formed into simpler shapes, but their resistance to heat and corrosion made them a valued material. A new composite of metal and ceramic (a so-called cermet) combines these properties in amazing ways. A research group led by Mario Caccia has now reported in the prestigious journal Nature a cermet with properties that make it suitable for heat exchangers in solar thermal systems.

The history of such composites goes back to the middle of the 20th century. The advent of jet engines created a need for materials with high resistance to heat and oxidation that could also withstand rapid temperature changes. Their excellent mechanical strength, which often surpassed that of existing metals, was highly appreciated by the newly created aerospace industry. Not surprisingly, the US Air Force funded further research into the production of cermets. Cermets have since been developed for multiple applications, but in most cases they have been used for small parts or surfaces. The newly presented composite withstands extreme temperatures, high pressures and rapid temperature changes. It could increase the efficiency of heat exchangers in solar thermal systems by 20%.

To produce the composite, the authors first made a precursor that was subjected to further processing, comparable to the unfired version of a clay pot. They compacted tungsten carbide powder into the approximate shape of the desired article (the heat exchanger) and heated it at 1,400 °C for 2 minutes to bond the particles together. They then further processed this porous preform to produce the desired final shape.

Next, the authors heated the preform in a chemically reducing atmosphere (a mixture of 4% hydrogen in argon) at 1,100 °C. At the same temperature, they immersed the preform in a bath of liquid zirconium and copper (Zr2Cu) and finally heated it to 1,350 °C. In this process, the zirconium displaces the tungsten from the tungsten carbide, producing zirconium carbide (ZrC) as well as tungsten and copper. The liquid copper is expelled from the ZrC matrix as the material solidifies. The final object consists of ~58% ZrC ceramic and ~36% tungsten metal with small amounts of tungsten carbide and copper. The beauty of the method is that the porous preform is converted into a non-porous ZrC/tungsten composite of the same dimensions; the total volume change is about 1-2%.

The elegant manufacturing process is complemented by the robustness of the final product. At 800 °C, the ZrC/tungsten cermet conducts heat 2 to 3 times better than the iron- and nickel-based alloys currently used in high-temperature heat exchangers. In addition to the improved thermal conductivity, the mechanical strength of the ZrC/tungsten composite is also higher than that of nickel alloys. Its mechanical properties are unaffected by temperatures of up to 800 °C, even after repeated heating and cooling cycles between room temperature and 800 °C. In contrast, iron alloys such as stainless steels, as well as nickel alloys, lose at least 80% of their strength under such conditions.
