Posted on

Bioelectrically engineered fuel produced by yeasts

Yeasts such as Saccharomyces cerevisiae are, as the name suggests, used for the large-scale production of beer and other alcoholic beverages. Their high salt and ethanol tolerance makes them useful not only for the production of beverages but also for the production of combustion fuels at high alcohol concentrations. Besides ethanol, long-chain fusel alcohols are of high interest for biofuel production as well. Bioethanol is already blended with gasoline and thus improves the CO2 balance of internal combustion engines. This liquid biofuel is made from either starch or lignocellulose. The production and use of bioethanol supports local economies, reduces CO2 emissions and promotes self-sufficiency. The latter is especially important for resource-depleted landlocked countries.

In order to efficiently produce ethanol and other alcohols from lignocellulose hydrolysates, yeasts must use both glucose and pentoses such as xylose and arabinose, because lignocellulosic biomass is rich in both glucose and xylose. This is the main disadvantage of Saccharomyces cerevisiae: it does not ferment xylose. Consequently, the identification of other yeast strains capable of fermenting both sugars could solve the problem. Highly efficient ethanol producers can be grown in co-cultures with yeasts capable of fermenting lignocellulose sugars. One such yeast is, for example, Wickerhamomyces anomalus.

To further improve ethanol production, bioelectric fermentation technology can be used to support traditional fermentation. The microbial metabolism can thus be controlled electrochemically. This technology has many benefits. The fermentation process becomes more selective due to the applied electrochemical potential, which in turn increases the efficiency of sugar utilization. In addition, the use of additives to control the redox equilibrium and the pH is minimized. Ultimately, cell growth can be stimulated, further increasing alcohol production.

Such bioelectric reactors are galvanic cells. The electrodes used in such a bioelectric reactor may act as electron acceptors (anodes) or electron sources (cathodes). These electrochemical changes affect the metabolism and cell regulation as well as the interactions between the yeasts used. Now, a research group from Nepal (a resource-depleted landlocked country) has used new strains of Saccharomyces cerevisiae and Wickerhamomyces anomalus in a bioelectric fermenter to improve ethanol production from biomass. The results were published in the journal Frontiers in Energy Research.

For their study, the researchers chose Saccharomyces cerevisiae and Wickerhamomyces anomalus as both are good ethanol producers. The latter is also able to convert xylose to ethanol. After the researchers applied a voltage to the bioelectrical system, ethanol production doubled. Both yeasts formed a biofilm on the electrodes, making the system ideal for use as a flow-through system because the microorganisms are not washed out.

Saccharomyces cerevisiae cells in a brightfield microscopic image at 600-fold magnification (Photo: Amanda Luraschi)

The researchers speculated that the increased ethanol production was due to the better conversion of pyruvate to ethanol − the yeast’s central metabolic mechanism. The researchers attributed this to accelerated redox reactions at the anode and cathode. The applied external voltage polarized the ions present in the cytosol, thus facilitating the electron transfer from the cathode. This and the accelerated glucose oxidation probably led to increased ethanol production.

Normally, pyruvate is converted into ethanol in fermenting yeast. External voltage input can control the kinetics of glucose metabolism in Saccharomyces cerevisiae under both aerobic and anaerobic conditions. Intracellular and trans-plasma membrane electron transfer systems play an important role in electron transport across the cell membrane. The electron transfer system consists of cytochromes and various redox enzymes, which confer redox activity to the membrane at certain sites.

The authors also found that an increased salt concentration improved conductivity and therefore ethanol production. The increased ethanol production from lignocellulosic biomass may also have been due to the presence of various natural compounds that promoted yeast growth. When the cellulose acetate membrane was replaced by a Nafion™ membrane, ethanol production increased as well, perhaps due to improved transport of xylose through the Nafion™ membrane and a decrease in internal resistance. A further increase in ethanol production was observed when the bioelectrical reactor was operated with fine platinum particles coated on the platinum anode and neutral red deposited on the graphite cathode.

Several yeast cultures from left to right: Saccharomyces cerevisiae, Candida utilis, Aureobasidium pullulans, Trichosporon cutaneum, Saccharomycopsis capsularis, Saccharomycopsis lipolytica, Hanseniaspora guilliermondii, Hansenula capsulata, Saccharomyces carlsbergensis, Saccharomyces rouxii, Rhodotorula rubra, Phaffia rhodozyma, Cryptococcus laurentii, Metschnikowia pulcherrima, Rhodotorula pallida

At Frontis Energy, we think that the present study is promising. However, long-chain fusel alcohols should be considered in the future as they are less volatile and more compatible with current internal combustion engines. These can also be easily converted into the corresponding long-chain hydrocarbons.

Posted on

Framework for a global carbon budget

Over the past decade, numerous studies have shown that global warming is roughly proportional to the concentration of CO2 in our atmosphere. In this way, one can estimate our remaining carbon budget. This is the total amount of man-made carbon dioxide that can still be released into the atmosphere before reaching a set global temperature limit. The nations of the world agreed on this limit in the 2015 Paris Agreement: it should not exceed 1.5°C, and in any case remain well below 2.0°C. However, diverging estimates have been made for the remaining carbon budget, which has a negative impact on policy-making. Now, an international research group of renowned climate experts has published a framework for the calculation of the global CO2 budget in Nature. The researchers suggest that the application of this framework should help to overcome the differences between estimates of the carbon budget, which will help to reduce uncertainties in research and policy.

Since the fifth report of the Intergovernmental Panel on Climate Change (IPCC), the concept of a carbon budget has become more important as an instrument for guiding climate policy. Over the past decade, a series of studies has clarified why the increase in the global average temperature is roughly proportional to the total amount of CO2 emissions caused by human activity since the Industrial Revolution. In the framework, the research group cites numerous published documents that provide evidence for the linearity of this correlation. This literature has allowed scientists to define the linear relationship between warming and CO2 emissions as the transient climate response to cumulative CO2 emissions (TCRE). The linearity is an appealing concept given the complexity of the Earth’s response to our CO2 emissions. Additional processes that affect future warming have been included in recent models, among them, for example, the thawing of the Arctic permafrost. These additional processes increase the uncertainty of current climate models. In addition, global warming is not caused by CO2 emissions alone. Other greenhouse gases, such as methane, fluorinated gases or nitrous oxide, as well as aerosols and their precursors affect global temperatures. This further complicates the relationship between future CO2 emissions and warming.

In the case of global warming caused by CO2, every tonne contributes to warming, whether it is emitted in the future, now, or in the last century. This means that global CO2 emissions must be reduced to zero and then remain zero. It also means that the more we emit in the coming years, the faster we have to reduce our emissions later. At zero emissions, warming would stabilize but not disappear; it might even reverse slightly. An overdraft of the carbon budget would have to be compensated by removing CO2 from the atmosphere later. One way of doing so would be a technology called direct air capture, which we reported on earlier. Ultimately, this will probably be the only option left, as carbon-neutral renewable energy sources only make up 5% of our energy mix. Establishing a global carbon budget further highlights the urgency of our clean energy transition. Unfortunately, there is a large divergence when it comes to the amount of CO2 remaining in our carbon budget. In their framework, the researchers cite numerous studies on carbon budgets compatible with the 1.5°C target. Starting in 2018, these estimates range from 0 tonnes of CO2 to 1,000 gigatons. For the 2.0°C target, the carbon budget ranges from around 700 gigatons to nearly 2,000 gigatons of remaining CO2 emissions. The aim of the researchers is to limit this uncertainty by establishing a budget framework. The central element is the equation for calculating the remaining carbon budget:

Blim = (Tlim − Thist − TnonCO2 − TZEC) / TCRE − EEsfb

The budget of the remaining CO2 emissions (Blim) for the specific temperature limit (Tlim) is a function of five terms that represent aspects of the geophysical and human-environment systems: the historical man-made warming (Thist), the non-CO2 contribution to the future temperature increase (TnonCO2), the zero emission commitment (TZEC), the TCRE, and an adjustment for possible unrepresented Earth system feedbacks (EEsfb).
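As a quick illustration of how these terms combine, here is a minimal sketch in Python. The numbers are placeholders chosen only to show the arithmetic, not values endorsed by the framework’s authors:

```python
def remaining_budget(t_lim, t_hist, t_nonco2, t_zec, tcre, e_esfb):
    """Remaining CO2 budget Blim (in Gt CO2) for a temperature limit t_lim (°C).

    tcre   -- transient climate response to cumulative emissions, in °C per Gt CO2
    e_esfb -- adjustment for unrepresented Earth system feedbacks, in Gt CO2
    """
    return (t_lim - t_hist - t_nonco2 - t_zec) / tcre - e_esfb

# Illustrative values: 1.5°C limit, 1.0°C historical warming, 0.1°C non-CO2
# warming, no zero-emissions commitment, TCRE of 0.45°C per 1,000 Gt CO2,
# and no feedback adjustment:
print(remaining_budget(1.5, 1.0, 0.1, 0.0, 0.45e-3, 0.0))  # ≈ 889 Gt CO2
```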

 

| Term | Key choices or uncertainties | Type | Level of understanding |
| --- | --- | --- | --- |
| Temperature limit (Tlim) | Choice of temperature metrics that quantify global warming, the choice of pre-industrial reference, and consistency with global climate targets | Choice | Medium to high |
| Historical man-made warming (Thist) | Incomplete data and methods for estimating the man-made component; see also Tlim | Choice and uncertainty | Medium to high |
| Non-CO2 contribution to future global warming (TnonCO2) | The level of non-CO2 contributions coinciding with global net-zero CO2 emissions; depends on policy choices, but also on the uncertainty of their implementation | Choice and uncertainty | Medium |
| Non-CO2 contribution to future global warming (TnonCO2) | Climate reaction to non-CO2 forcers, such as aerosols and methane | Uncertainty | Low to medium |
| Zero-emissions commitment (TZEC) | The extent of the decadal zero-emissions commitment at near-zero annual carbon emissions | Uncertainty | Low |
| Transient climate response to cumulative CO2 emissions (TCRE) | TCRE uncertainty, linearity, and cumulative CO2 emissions that affect temperature metrics of the TCRE estimate | Uncertainty | Low to medium |
| Transient climate response to cumulative CO2 emissions (TCRE) | Uncertainty of the TCRE linearity, value, and distribution beyond peak warming, which is affected by cumulative CO2 emissions reduction | Uncertainty | Low |
| Unrepresented Earth system feedback mechanisms (EEsfb) | Impact of permafrost thawing as well as methane release from wetlands on Earth system models and feedbacks | Uncertainty | Very low |

In the CO2 budget, the unrepresented Earth system feedback (EEsfb) is arguably the greatest uncertainty. These feedback processes are typically associated with the thawing of permafrost and the associated long-term release of CO2 and CH4. However, other sources of feedback have been identified as well. These include, for example, variations of CO2 uptake by the vegetation and the associated nitrogen availability. Further feedback processes involve changes in surface albedo, cloud cover, or fire conditions.

It remains a challenge to adequately characterize the uncertainties surrounding the estimates of our carbon budget. In some cases, the reason for these uncertainties is inaccurate knowledge of the underlying processes or inaccurate measurements. In other cases, the terminology is used inconsistently. For better comparability and flexibility, the researchers propose to routinely report global surface air temperature values. This method gives robust data for models and model runs over selected time periods. More detailed comparisons between published estimates of the carbon budget are currently difficult because the original data used for the publications are often missing. The researchers therefore propose to provide these data along with future publications.

Breaking down the carbon budget into its individual factors makes it possible to identify a number of promising pathways for future research. One area of research that might advance this field is to look more closely at the TCRE. Future research is expected to narrow down the range of TCRE uncertainties. Another promising area of research is the study of the correlation between individual factors and their associated uncertainties, for example between uncertainties in Thist and TnonCO2. This could be achieved by developing methods that allow a more reliable estimate of historical human-induced warming. It is also clear that less complex climate models are useful for further reducing the uncertainties of climate models, and hence of the carbon budget. Currently, each factor of the framework presented by the researchers has its own uncertainties, and there is no method to formally combine them.

At Frontis Energy, too, we think that progress in these areas would improve our understanding of the estimates of our carbon budget. A systematic understanding of the carbon budget is crucial for effectively addressing global warming challenges.

Posted on

Cheap, high-octane biofuel discovered

Researchers from the National Renewable Energy Laboratory (NREL) have developed a cheap method for producing high-octane gasoline from methanol. They recently published their method in the journal Nature Catalysis. Methanol can be synthesized from CO2 via various routes, as we reported last year. Biomass, such as wood, is one possibility.

The production of biofuels from wood, however, is too expensive to compete with fossil fuels. To find a solution to this problem, the researchers combined their basic research with an economic analysis. They first identified the most expensive part of the process and then found methods to reduce these costs with methanol as an intermediate.

So far, the cost of converting methanol to gasoline or diesel was about $1 per gallon. The researchers have now reached a price of about $0.70 per gallon.

The catalytic conversion of methanol into gasoline is an important research area in the field of CO2 recovery. The traditional method is based on multi-stage processes and high temperatures. It is expensive and produces low-quality fuel in small quantities, making it uncompetitive with petroleum-based fuels.

Hydrogen deficiency was the initial problem the researchers had to overcome, since hydrogen is the key energy-carrying element in hydrocarbons. The researchers hypothesized that using the transition metal copper would solve this problem, which it did. They estimated that the copper-infused catalyst resulted in a 38% higher yield at lower cost.

By facilitating the reintegration of C4 byproducts during the homologation of dimethyl ether, the copper zeolite catalyst enabled this 38% increase in product yield and a 35% reduction in conversion cost compared with conventional zeolite catalysts. Alternatively, the C4 byproducts can be converted into a synthetic kerosene meeting five specifications of a typical jet fuel. In that case, the fuel synthesis costs increase slightly, but although the cost savings are then smaller, the resulting product has a higher value.

Apart from the costs, the new process offers users further competitive advantages. For example, companies can compete with ethanol producers for credits for renewable fuels (if the carbon used comes from biogas or household waste). The process is also compatible with existing methanol plants that use natural gas or solid waste to produce syngas.

Posted on

Semiconductive nanotubes with photovoltaic effect

Cost-effective and efficient methods for converting sunlight into electricity are the focus of green energy research. Solar cells developed for this purpose are currently made of semiconductors such as silicon. Electrical energy is generated at the junction between two different semiconductors. However, the efficiency of these solar cells has almost reached its theoretical limit. New methods of converting sunlight into electricity must be found if solar energy is to be used as a major source of electricity. An international research team from Germany, Japan and Israel has now made important progress in this direction. Zhang and colleagues recently published their findings in the prestigious journal Nature. They demonstrate a junction-free solar cell made by rolling an atomically thin semiconductor layer into a nanotube.

In a conventional solar cell, two regions of a semiconductor are doped with different chemical elements. The electrical current is generated by the negatively charged electrons of one region and the positively charged electron vacancies (holes) of the other. At the junction between these two regions, an electric field is created. When sunlight is absorbed at this junction, electron-hole pairs are formed. The electrons and holes are then separated by the electric field, generating an electric current. This conversion of solar energy into electricity is called the photovoltaic effect, and it is particularly important for green energy production. As mentioned above, its efficiency has almost reached the theoretical limit.

In technical terms, the photovoltaic effect occurs at traditional p-n junctions, where a p-type material (with an excess of holes) adjoins an n-type material (with an excess of electrons). Current is generated by the photo-induced generation of electron-hole pairs and their subsequent separation. Further advances are expected from other photovoltaic effects that do not require a junction and occur only in crystals with broken inversion symmetry. However, the practical implementation of these effects is impeded by the low efficiency of the materials. Semiconductors with reduced dimensionality or smaller band gaps have been shown to be more efficient. Transition metal dichalcogenides (TMDs) are, for example, two-dimensional small-bandgap semiconductors in which various such effects have been observed upon breaking the inversion symmetry of their bulk crystals.

The reported bulk photovoltaic effect (BPVE) is based on tungsten disulfide, a member of the TMD family. Crystals of this material have a layered structure and can be exfoliated into layers similar to graphite. The resulting atomic sheets can then be rolled by chemical methods into tubes of about 100 nanometers. The authors produced photovoltaic devices from three types of tungsten disulfide: a monolayer, a bilayer and a nanotube.

This amounts to a systematic reduction of crystal symmetry that goes beyond merely breaking the inversion symmetry: the transition from a two-dimensional monolayer to a nanotube with polar properties significantly improved the photovoltaic response. The photovoltaic current density produced is orders of magnitude greater than that of other comparable materials. The results not only confirm the potential of TMD-based nanomaterials, but also the importance of reducing crystal symmetry for improving the BPVE.

While the nanotube devices showed a large BPVE, the monolayer and bilayer devices produced only a negligible electric current under illumination. The researchers attribute the different performance of the solar cells to their different crystal symmetries. In this way, a current can be generated spontaneously in uniform semiconductors, without a junction.

The BPVE was first observed in 1956 at Bell Labs, New Jersey, just two years after the invention of the modern silicon solar cell. The effect is limited to non-centrosymmetric materials, which lack symmetry under spatial inversion, that is, under the combination of a 180° rotation and a reflection. The effect has two attractive properties: the current generated by light depends on the polarization of the incident light, and the associated voltage is greater than the band gap of the material, which is the energy required to excite conducting free electrons. However, the effect typically has a low conversion efficiency and was therefore of academic rather than industrial interest.

To achieve high efficiency, a photovoltaic material must combine high light absorption with low internal symmetry. However, these two properties usually do not coexist in a given material. Semiconductors that absorb most of the incident sunlight generally have high symmetry, which reduces or even prevents the effect. Low-symmetry materials, such as perovskite oxides, absorb little sunlight due to their large band gap. To circumvent this problem, efforts have been made to improve light absorption in low-symmetry materials, for example by doping. Meanwhile, it has been shown that the effect can occur in semiconductors when mechanical fields are used to adjust the crystal symmetry of the material.

The newly discovered solution is encouraging with regard to the production of high absorption semiconducting nanotubes. In the case of tungsten disulfide, the crystal symmetry of the nanotubes is reduced compared to the mono- and bilayers due to the curved walls of the tube. The combination of excellent light absorption and low crystal symmetry means that the nanotubes have a significant photovoltaic effect. The current density exceeds that of materials which are inherently low in symmetry. Nevertheless, the conversion efficiency achieved is still much lower than that of the photovoltaic effect in conventional junction-based solar cells.

The authors’ findings demonstrate the great potential of nanotubes in solar energy production and raise various technological and scientific challenges. From an application perspective, it would be useful to produce a solar cell that consists of large arrays of semiconductor nanotubes, to check whether the approach is scalable. The direction of the generated current is largely determined by the internal symmetry of the material. Therefore, uniform symmetry across the nanotube array would be required to create a collective current; otherwise, the individual currents could cancel each other out.

At Frontis Energy, we wonder whether the described method could be combined with the classic photovoltaic effect in the same solar cell, which could increase overall efficiency. The two effects could use the solar energy consecutively. Despite the remaining challenges, the presented work offers a path toward the development of highly efficient solar cells.

(Photo: Wikipedia)

Posted on

Machine learning makes smarter batteries

Renewable energies, such as wind and solar energy, are naturally intermittent. To balance their demand and supply, the batteries of, for example, electric vehicles can be charged and act as an energy buffer for the power grid. Cars spend most of their time idle and could, at the same time, feed their electricity back into the grid. While this is still a dream of the future, the commercialization of electric and hybrid vehicles is already creating a growing demand for long-lasting batteries, both for driving and for grid buffering. Consequently, methods for evaluating the state of a battery will become increasingly important.

The long duration of battery health tests is a problem, hindering the rapid development of new batteries. Better battery life forecasting methods are therefore urgently needed but are extremely difficult to develop. Now, Severson and her colleagues report in the journal Nature Energy that machine learning can help to predict battery life by creating computer models. The published algorithms use data from early-stage charge and discharge cycles.

Normally, a figure of merit describes the health of a battery. It quantifies the ability of the battery to store energy relative to its original state. The health status is 100% when the battery is new and decreases with time. This is similar to the state of charge of a battery. Estimating the state of charge of a battery is, in turn, important to ensure safe and correct use. However, there is no consensus in the industry and science as to what exactly a battery’s health status is or how it should be determined.

The state of health of a battery reflects two signs of aging: progressive capacity decline and impedance increase (another measure of electrical resistance). Estimates of the state of charge of a battery must therefore take into account both the drop in capacity and the increase in impedance.
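To make the two aging signs concrete, here is a minimal sketch of how such figures of merit could be computed; the function name and the example numbers are ours, not from the cited study:

```python
def state_of_health(capacity_now, capacity_nominal, impedance_now, impedance_nominal):
    """Return capacity- and impedance-based state-of-health figures (in %).

    Capacity-based SOH falls as the battery stores less charge;
    impedance-based SOH falls as the internal resistance rises.
    """
    soh_capacity = 100.0 * capacity_now / capacity_nominal
    soh_impedance = 100.0 * impedance_nominal / impedance_now
    return soh_capacity, soh_impedance

# A cell that lost 8% of its capacity and gained 25% impedance:
print(state_of_health(1.84, 2.0, 0.05, 0.04))  # (92.0, 80.0)
```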

Lithium-ion batteries, however, are complex systems in which both capacity fade and impedance increase are caused by multiple interacting processes. Most of these processes cannot be studied independently since they often occur simultaneously. The state of health can therefore not be determined from a single direct measurement. Conventional health assessment methods include examining the interactions between the electrodes of a battery. Since such methods often intervene directly in the “battery” system, they render the battery useless, which is hardly desirable.

A battery’s health status can also be determined in less invasive ways, for example using adaptive models and experimental techniques. Adaptive models learn from recorded battery performance data and adjust themselves. They are useful if system-specific battery information is not available. Such models are suitable for the diagnosis of aging processes. The main problem, however, is that they must be trained with experimental data before they can be used to determine the current capacity of a battery.
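The following toy estimator illustrates the idea of such an adaptive model under our own simplifying assumptions (a plain exponential forgetting update, no system-specific battery information):

```python
class AdaptiveCapacityEstimator:
    """Track usable capacity from per-cycle discharge measurements.

    A deliberately minimal adaptive model: each new full-discharge
    measurement is blended into the running estimate.
    """

    def __init__(self, initial_capacity_ah, forgetting=0.9):
        self.capacity = initial_capacity_ah
        self.forgetting = forgetting

    def update(self, measured_discharge_ah):
        self.capacity = (self.forgetting * self.capacity
                         + (1.0 - self.forgetting) * measured_discharge_ah)
        return self.capacity

estimator = AdaptiveCapacityEstimator(2.0)       # 2.0 Ah when new
for q in (1.98, 1.97, 1.95, 1.96):               # measured Ah per cycle
    estimator.update(q)
print(round(estimator.capacity, 3))              # drifts toward ~1.99 Ah
```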

Experimental techniques are used to evaluate certain physical processes and failure mechanisms. This allows the rate of future capacity loss to be estimated. Unfortunately, these methods cannot detect intermittent errors. Alternative techniques use the rate of voltage or capacitance change (rather than raw voltage and current data). In order to accelerate the development of battery technology, further methods need to be found which can accurately predict the life of batteries.

Severson and her colleagues have created a comprehensive data set that includes the performance data of 124 commercial lithium-ion batteries during their charge and discharge cycles. The authors used a variety of rapid-charging conditions with identical discharge conditions, which caused the battery lifetimes to vary widely, covering a range from 150 to 2,300 cycles.

The researchers then used machine learning algorithms to analyze the data, creating models that can reliably predict battery life. Their model already detected signs of the eventual capacity fade within the first 100 cycles of each experimentally characterized battery. The best model predicted the lifetime of about 91% of the data sets studied. Using only the first five cycles, batteries could be classified into categories with short (<550 cycles) or long lifetimes.
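A minimal sketch of this approach in Python, using scikit-learn: we regress log cycle life on a single early-cycle feature, in the spirit of the paper’s variance-based discharge features. The data points below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# One feature per cell: log10 variance of the discharge-capacity curve
# difference between cycle 100 and cycle 10 (invented numbers).
X = np.array([[-4.2], [-3.5], [-2.8], [-2.1], [-1.6]])
y = np.log10([2100, 1400, 860, 500, 310])        # observed cycle lives

model = LinearRegression().fit(X, y)
new_cell = np.array([[-3.0]])
print(10 ** model.predict(new_cell)[0])          # predicted cycle life
```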

The researchers’ work shows that data-driven modeling using machine learning allows forecasting the state of health of lithium-ion batteries. The models can identify aging processes that are not otherwise apparent in capacity data during early cycles. Accordingly, the new approach complements previous predictive models. At Frontis Energy, we also see the opportunity to combine such data with models that predict the behavior of other complex dynamic systems.

(Photo: Wikipedia)

Posted on

Faster photoelectrical hydrogen

Achieving high current densities while maintaining high energy efficiency is one of the biggest challenges in improving photoelectrochemical devices. Higher current densities accelerate the production of hydrogen and other electrochemical fuels.

Now a compact, solar-powered, hydrogen-producing device has been developed that provides the fuel at record speed. In the journal Nature Energy, the researchers around Saurabh Tembhurne describe a concept that allows capturing concentrated solar radiation (up to 474 kW/m²) through thermal integration, mass transport optimization and better electronic integration between the photoabsorber and the electrocatalyst.

The research group at the Swiss Federal Institute of Technology in Lausanne (EPFL) calculated the maximum theoretical increase in efficiency and then experimentally verified the calculated values using a photoabsorber and an iridium-ruthenium oxide-platinum based electrocatalyst. The electrocatalyst reached a current density greater than 0.88 A/cm². The calculated conversion efficiency of solar energy into hydrogen was more than 15%. The system was stable under various conditions for more than two hours. Next, the researchers want to scale up their system.

The produced hydrogen can be used in fuel cells for power generation, which is why the developed system is suitable for energy storage. The hydrogen-powered generation of electricity emits only pure water. However, the clean and fast production of hydrogen remains a challenge. The photoelectric method uses materials similar to those of solar modules. In the new system, the electrolytes were based on water, although ammonia would also be conceivable. Sunlight reaching these materials triggers a reaction in which water is split into oxygen and hydrogen. So far, however, no photoelectric method could be used on an industrial scale.

2 H2O → 2 H2 + O2; ∆G°’ = +237 kJ/mol (H2)

The newly developed system absorbed more than 400 times the amount of solar energy that normally shines on a given area. The researchers used high-power lamps to provide the necessary “solar energy”. Existing solar systems concentrate solar energy to a similar degree with the help of mirrors or lenses. The waste heat is used to accelerate the reaction.

The team predicts that the test equipment, with a footprint of approximately 5 cm², can produce an estimated 47 liters of hydrogen gas in six hours of sunshine. This is the highest rate per area for such solar-powered electrochemical systems. At Frontis Energy we hope to be able to test and offer this system soon.
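The reported numbers can be sanity-checked with Faraday’s law (two electrons per H2 molecule); the molar gas volume at about 25°C is our assumption:

```python
F = 96485.0           # C/mol, Faraday constant
VM = 24.5             # L/mol, molar gas volume at ~25°C and 1 atm

j = 0.88              # A/cm², reported current density
rate = j / (2 * F)    # mol H2 per second per cm² of electrode
print(rate * 3600 * VM)              # ≈ 0.4 L H2 per hour per cm²

# Average current implied by 47 L of H2 in six hours:
mol_h2 = 47.0 / VM
print(2 * F * mol_h2 / (6 * 3600))   # ≈ 17 A
```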

(Photo: Wikipedia)

Posted on

Ammonia energy storage #3

As a loyal reader of our blog, you will certainly remember our previous publications on ammonia energy storage. There, we described possible ways to extract ammonia from the air, as well as the recovery of its energy in the form of methane (patent pending WO2019/079908A1). Since global food production requires large amounts of ammonia fertilizers, the technologies for its extraction from air are already very mature. These technologies are essentially all based on the Haber-Bosch process, which was industrialized at the beginning of the last century. In this process, atmospheric nitrogen (N2) is reduced to ammonia (NH3). Despite the simplicity of the molecules involved, the cleavage of the strong nitrogen−nitrogen bond in N2 and the formation of nitrogen−hydrogen bonds pose a major challenge for catalytic chemists. The reaction usually takes place under harsh conditions and requires a lot of energy, i.e. high reaction temperatures, high pressures and complicated combinations of reagents, which are also often expensive and energy-intensive to manufacture.

Now, a research group led by Yuya Ashida has published an article in the renowned journal Nature, in which they show that a samarium compound in aqueous solution, combined with a molybdenum catalyst, can form ammonia from atmospheric nitrogen. The work opens up new possibilities in the search for ammonia synthesis under ambient conditions. Under such conditions, less energy is required to produce ammonia, resulting in a higher energy efficiency for energy storage. In today’s Haber-Bosch process, air and hydrogen gas are combined over an iron catalyst. The resulting global ammonia production ranges from 250 to 300 tonnes per minute, delivering fertilizers that feed nearly 60% of the world’s population (The Alchemy of Air, available at Amazon).
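A quick back-of-the-envelope conversion puts the quoted rate into more familiar units:

```python
minutes_per_year = 365 * 24 * 60
for tonnes_per_minute in (250, 300):
    print(tonnes_per_minute * minutes_per_year / 1e6)   # Mt NH3 per year
# ≈ 131 and 158 Mt per year, in line with common estimates of
# global ammonia production
```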

Comparison of different approaches to produce ammonia. Top: In the industrial Haber-Bosch synthesis of ammonia (NH3), nitrogen gas (N2) reacts with hydrogen molecules (H2), typically in the presence of an iron catalyst. The process requires high temperatures and pressures but is thermodynamically ideal because only little energy is wasted on side reactions. Center: Nitrogenase enzymes catalyze the reaction of nitrogen with six electrons (e−) and six protons (H+) under ambient conditions to form ammonia. However, two additional electrons and protons form one molecule of H2. The conversion of ATP (the biological energy “currency”) into ADP drives the reaction. This reaction has a high chemical overpotential: it consumes much more energy than is needed for the actual ammonia-forming reaction. Bottom: In the new reaction proposed by Ashida and colleagues, a mixture of water and samarium diiodide (SmI2) is converted with nitrogen to ammonia under ambient conditions and in the presence of a molybdenum catalyst. SmI2 weakens the O−H bonds of the water and generates the hydrogen atoms, which then react with atmospheric nitrogen.

On an industrial scale, ammonia is synthesized at temperatures that exceed 400°C and pressures of approximately 400 atmospheres. These conditions are often referred to as “harsh”, and in the early days of the Haber-Bosch development they were difficult to control; fatal accidents were not uncommon. This has motivated many chemists to find “milder” alternatives, which always meant searching for new catalysts to lower operating temperatures and pressures. Such catalysts would ultimately reduce the capital investment for the construction of new fertilizer plants. Since ammonia synthesis is one of the largest producers of carbon dioxide, this would also reduce the associated emissions.

Like many other chemists before them, the authors were inspired by nature. Nitrogenase enzymes carry out the biological conversion of atmospheric nitrogen into ammonia, a process called nitrogen fixation. On today’s Earth, this process is the source of the nitrogen atoms in amino acids and nucleotides, the elemental building blocks of life. In contrast to the Haber-Bosch process, nitrogenases do not use hydrogen gas as a source of hydrogen atoms. Instead, they transfer protons (hydrogen ions, H+) and electrons (e−) to each nitrogen atom to form N−H bonds. Although nitrogenases fix nitrogen at ambient temperature, they use eight protons and eight electrons per molecule of N2, even though the stoichiometry of the reaction requires only six of each. In this way, nitrogenases provide the necessary thermodynamic drive for nitrogen fixation. The excess of hydrogen equivalents means that nitrogenases have a high chemical overpotential; that is, they consume much more energy than would actually be needed for nitrogen fixation.
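The textbook stoichiometry of the nitrogenase reaction makes this excess explicit; note the obligatory molecule of H2 and the sixteen ATP consumed per N2:

N2 + 8 H+ + 8 e− + 16 ATP → 2 NH3 + H2 + 16 ADP + 16 Pi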

The now published reaction is not the first attempt to mimic the nitrogenase reaction. In the past, metal complexes have been used with proton and electron sources to convert atmospheric nitrogen into ammonia. The same researchers had previously developed molybdenum complexes that catalyze nitrogen fixation in this way, producing 230 ammonia molecules per molybdenum complex. The associated overpotentials were significant at almost 1,300 kJ per mole of nitrogen. By comparison, the Haber-Bosch process is not that energy-intensive, provided the right catalyst is used.

The challenge for catalysis researchers is to combine the best biological and industrial approaches to nitrogen fixation so that the process proceeds at ambient temperatures and pressures. At the same time, the catalyst must reduce the chemical overpotential to such an extent that the construction of new fertilizer plants no longer requires such high capital investments. This is a major challenge, because no combination of acids (which serve as proton sources) and reducing agents (the electron sources) at the thermodynamic level of hydrogen gas is reactive enough to form N−H bonds at room temperature. In the now described pathway with molybdenum and samarium, the researchers adopted a strategy in which the proton and electron sources are no longer used separately. This is a fundamentally new approach to catalytic ammonia synthesis. It makes use of a phenomenon known as coordination-induced bond weakening, which here is based on the interaction of samarium diiodide (SmI2) and water.

Water is stable because of its strong oxygen−hydrogen (O−H) bonds. However, when the oxygen atom in water coordinates to SmI2, it exposes its lone electron pairs and its O−H bonds are weakened. As a result, the mixture becomes a readily available source of hydrogen atoms, that is, of protons and electrons. The researchers around Yuya Ashida use this mixture with a molybdenum catalyst to fix nitrogen. SmI2-water mixtures are particularly suitable for this type of catalysis: considerable coordination-induced bond weakening had previously been measured in them and was used, among other things, for the formation of carbon−hydrogen bonds.

The extension of this idea to catalytic ammonia synthesis is remarkable for two reasons. First, the molybdenum catalyst facilitates ammonia synthesis in aqueous solution. This is surprising because molybdenum complexes in water are usually degraded. Second, the use of coordination-induced bond weakening provides a new method for nitrogen fixation at ambient conditions. It also avoids potentially hazardous combinations of proton and electron sources, which are a fire hazard. The authors’ approach also works when ethylene glycol (HOCH2CH2OH) is used instead of water, extending the candidates for proton and electron sources by an additional precursor.

Ashida and colleagues propose a catalytic cycle for their process in which the molybdenum catalyst initially coordinates nitrogen and cleaves the N−N bond to form a molybdenum nitrido complex, which contains a molybdenum−nitrogen triple bond. The SmI2-water mixture then delivers hydrogen atoms to this complex, eventually producing ammonia. The formation of N−H bonds with molybdenum nitrido complexes represents a significant thermodynamic challenge, since the N−H bonds are also weakened by the molybdenum. Nevertheless, this disadvantage is offset by the reduction of the chemical overpotential. The SmI2 not only facilitates the transfer of hydrogen atoms but also keeps the metal in a reduced form, preventing the undesired formation of molybdenum oxide in aqueous solution.

The new process still has significant operational hurdles to overcome before it can be used on an industrial scale. For example, SmI2 is used in large quantities, which generates a lot of waste. The separation of ammonia from aqueous solutions is difficult in terms of energy consumption. However, if the process were used for energy storage in combination with our recovery method, the separation from the aqueous solution would no longer be necessary. Finally, there is still a chemical overpotential of about 600 kJ/mol. Future research should focus on finding alternatives to SmI2, for example based on metals that are more abundant than samarium and also promote coordination-induced bond weakening. As Fritz Haber and Carl Bosch experienced themselves, the newly developed method will probably take some time to develop before it becomes available on an industrial scale.

(Photo: Wikipedia)

Posted on

Melting ice sheets in Greenland contribute 25% to sea level rise

Recently we reported the loss of snow cover in Europe. The snow is not only gone in many parts of Europe; Greenland’s ice cover is melting, too. The Greenland ice sheet contributes 25% to global sea-level rise, making it the largest contributor of the cryosphere. The increased mass loss of Greenland ice during the 21st century is mainly due to increased surface water runoff, of which ~93% comes directly from the small ablation zone of the ice sheet (~22% of the ice surface). As the snow melts in summer, bare glacier ice is increasingly exposed in this ablation zone. Bare ice is darker and less porous than snow; it absorbs more than twice the solar radiation while also retaining less meltwater. Bare ice produces a large proportion (~78%) of Greenland’s total runoff into the sea, although in summer only a small area of the ice is exposed. Accurately capturing the reduced albedo and the full extent of bare ice in climate models is critical to determining Greenland’s present and future runoff contribution to sea-level rise.

The mass loss of the Greenland ice sheet has recently increased due to the accelerated melting of its surface. As this melting is critically affected by surface albedo, understanding the processes and potential feedbacks regarding the albedo is required for accurately forecasting mass loss. Fluctuations of the snowline affected the radiative variability of the ablation zone five times more strongly than the hydrological and biological processes that also darken the ice sheet. Because the ice sheet is shallower at higher altitudes, variations in the snowline have an even greater impact on melt when the climate is warmer. As a result of these fluctuations, the mapped bare-ice area was largest during the summer of 2012, the record year of snowmelt, with 300,050 km², i.e. 16% of the ice surface. The smallest extent of bare ice, 184,660 km², was observed in 2006 and corresponded to 10% of the ice surface, i.e. almost 40% less area than in 2012. However, the observed variation was high and the observation period too short for a solid trend assessment.
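The quoted extremes are easy to verify:

```python
a_2012, a_2006 = 300050.0, 184660.0            # km² of bare ice
print(100 * (a_2012 - a_2006) / a_2012)        # ≈ 38% less area in 2006
```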

Current climate models are too inaccurate in predicting the sea-level rise during strong melt years, leading to uncertainty in the estimation of Greenland’s contribution to global sea-level rise. To understand the factors that influence melting, Jonathan Ryan of Brown University, Providence, Rhode Island, and his colleagues have investigated Greenland’s snowline. At altitudes below the snowline, the darker ice is not covered by snow. This snowline moves up or down during Greenland’s seasons. The researchers mapped these movements between 2001 and 2017 using satellite images. The average height of the snowline at the end of summer varied between 1,330 m in 2009 and 1,650 m in 2012. These fluctuations in the snowline are the most important factor when it comes to how much solar energy the ice sheet absorbs. Modelers must consider this effect to improve their predictions. Knowing how much and how fast the Greenland ice melts will help us to take better protective measures. At Frontis Energy, we think that the best protection against sea-level rise is the prevention and recycling of CO2.

(Photo: Wikipedia)

Posted on

Economic losses caused by flooding due to global warming

In Europe, floods are linked to strong fluctuations of atmospheric pressure, also known as the North Atlantic Oscillation. Stefan Zanardo and his colleagues at Risk Management Solutions, London, UK, analyzed historical records of severe flooding in Europe since 1870 and compared them with the patterns of atmospheric pressure at the time of the floods. When the North Atlantic Oscillation is in a positive state, a depression over Iceland drives wind and storms across northern Europe. In a negative state, by contrast, it makes southern Europe moister than usual. Normally, floods occur in northern Europe. They caused the most damage when the North Atlantic Oscillation was positive in winter and enough rain had already fallen to saturate the soil, creating high-risk conditions for flooding. Air pressure patterns in Europe may change with global warming, and public administrations should take this into account when assessing the flood risk in a region, the researchers say.

This is important because flooding in Europe often causes loss of life, significant property damage, and business interruptions. Global warming will further worsen this situation, and the risk distribution will change as well. The frequent occurrence of catastrophic flooding in recent years has sparked strong interest in this problem in both the public and private sectors. The public sector has been working to improve early warning systems, and these early warning systems do have economic benefits. In addition, various risk-mitigating strategies have been implemented in European countries. These include flood protection, measures to increase risk awareness, and risk transfer through a better dissemination of flood insurance. The fight against the root cause, that is, global warming itself, however, still falls far short of what is needed.

Correlations between large-scale climate patterns, in particular the North Atlantic Oscillation, and extreme events in the water cycle on the European continent have long been described in the literature. With more severe and more frequent flooding as well as alarming global warming scenarios, concerns over future flood-related economic losses have become the focus of public attention. Although it is known that climatic patterns also control meteorological events, it is not always clear whether this link affects the frequency and severity of flooding and the associated economic losses. In their study, the researchers relate the North Atlantic Oscillation to economic flood losses.

The researchers used recent data from flood databases as well as disaster models to establish this relation. The models allowed them to quantify the economic losses that are ultimately caused by the North Atlantic Oscillation. These losses vary widely between the countries within the influence of the North Atlantic Oscillation. The study shows that the North Atlantic Oscillation is a good predictor of average losses in the long term. Based on the predictability of the North Atlantic Oscillation, the researchers argue that, in particular, the temporal variations of flood risk caused by climate oscillations can be forecast. This can help to counter catastrophic flood events early on. As a result, flood damage can be minimized or even avoided. As scientists improve their predictions for the North Atlantic Oscillation, society will be better prepared for future flooding.

(Photo: Wikipedia, Stefan Penninger, Sweden)

Posted on

Ammonia energy storage #2

Recently, we reported on plans by Australian entrepreneurs and their government to use ammonia (NH3) to store excess wind energy. We proposed converting ammonia and CO2 from wastewater into methane gas (CH4), because it is more stable and easier to transport. The procedure follows the chemical equation:

8 NH3 + 3 CO2 → 4 N2 + 3 CH4 + 6 H2O
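A simple atom count confirms that this equation is balanced; the dictionaries below just tally each element on both sides:

```python
left  = {"N": 8, "H": 8 * 3, "C": 3, "O": 3 * 2}          # 8 NH3 + 3 CO2
right = {"N": 4 * 2, "H": 3 * 4 + 6 * 2, "C": 3, "O": 6}  # 4 N2 + 3 CH4 + 6 H2O
print(left == right)   # True: nitrogen, hydrogen, carbon and oxygen all balance
```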

Now we have published a scientific article in the online magazine Frontiers in Energy Research where we show that the process is thermodynamically possible and does indeed occur. Methanogenic microbes in anaerobic digester sludge remove the hydrogen (H2) formed by electrolysis from the reaction equilibrium. As a result, the redox potentials of the oxidative (N2/NH3) and the reductive (CO2/CH4) half-reactions come so close that the process becomes spontaneous. It requires a catalyst in the form of wastewater microbes.

Pourbaix diagram of ammonium oxidation, hydrogen formation and CO2 reduction. At pH 7 and higher, the oxidation of ammonium coupled to methanogenesis becomes thermodynamically possible.

To prove our idea, we first searched for the right microbes that could carry out the ammonia oxidation. For our experiments in microbial electrolysis cells, we used microorganisms from sediments of the Atlantic Ocean off Namibia as starter cultures. Marine sediments are particularly suitable because they are relatively rich in ammonia, free from oxygen (O2), and contain less organic carbon than other ammonia-rich environments. Excluding oxygen is important because it is used by ammonia-oxidizing microbes in a process called nitrification:

2 NH3 + 3 O2 → 2 NO2− + 2 H+ + 2 H2O

Nitrification would have caused an electrochemical short circuit, as the electrons are transferred from the ammonia directly to the oxygen. This would have bypassed the anode (the positive, electron-accepting electrode) and stored the energy of the ammonia in the water, where it is useless. This is because anodic water oxidation consumes much more energy than the oxidation of ammonia. In addition, precious metals are often necessary for water oxidation. Without oxygen production at the anode, we were able to show that the oxidation of ammonium (the dissolved form of ammonia) is coupled to the production of hydrogen.

Oxidation of ammonium to nitrogen gas is coupled to hydrogen production in microbial electrolysis reactors. The applied potentials ranged from +550 mV to +150 mV.

It was important that the electrochemical potential at the anode remained more negative than the +820 mV required for water oxidation. For this purpose, we used a potentiostat that kept the electrochemical potential constant between +550 mV and +150 mV. At all these potentials, N2 was produced at the anode and H2 at the cathode. Since ammonium was the only source of electrons in the anode compartment, the electrons for hydrogen production could come only from the ammonium oxidation. In addition, ammonium was also the only nitrogen source for the production of N2. Consequently, the two processes must be coupled.

In the next step, we wanted to show that this process also has a useful application. Nitrogen compounds, predominantly ammonium, are often found in wastewater; among them are also drugs and their degradation products. At the same time, 1-2% of the energy produced worldwide is consumed by the Haber-Bosch process, in which N2 is extracted from the air to produce nitrogen fertilizer. Another 3% of our energy is then used to remove the same nitrogen from our wastewater. This senseless waste of energy accounts for 5% of our greenhouse gas emissions. Instead, wastewater treatment plants could be net energy producers. In fact, a small part of the energy in wastewater has been recovered as biogas for more than a century. During biogas production, organic material from anaerobic digester sludge is decomposed by microbial communities and converted into methane:

H3C−COO− + H2O → CH4 + HCO3−; ∆G°’ = −31 kJ/mol (CH4)

The reaction produces CO2 and methane at a ratio of 1:1. Unfortunately, the CO2 in the biogas makes it almost worthless. As a result, biogas is often flared off, especially in places where natural gas is cheap. The removal of CO2 would greatly enhance the product and can be achieved using CO2 scrubbers. More reduced carbon sources can shift the ratio of CO2 to CH4, but some CO2 would always remain in the biogas. Adding hydrogen to anaerobic digesters solves this problem technically; the process is called biogas upgrading. The hydrogen could be produced by electrolysis:

2 H2O → 2 H2 + O2; ∆G°’ = +237 kJ/mol (H2)

Electrolysis of water, however, is expensive and requires a high energy input because it takes place at a relatively high voltage of 1.23 V. One way to get around this is to replace the water with ammonium:

2 NH4+ → N2 + 2 H+ + 3 H2; ∆G°’ = +40 kJ/mol (H2)
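Both voltages quoted in this post follow from the Gibbs energies via E = ΔG/(nF), with n = 2 electrons per molecule of H2. A minimal check, assuming standard biochemical conditions:

```python
F = 96485.0              # C/mol, Faraday constant
print(237e3 / (2 * F))   # water electrolysis, ∆G°' = +237 kJ/mol H2: ≈ 1.23 V
print(40e3 / (2 * F))    # ammonium route, ∆G°' = +40 kJ/mol H2: ≈ 0.21 V
# Note: the text quotes 136 mV for the ammonium reaction under its
# experimental conditions; the exact value depends on pH and concentrations.
```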

With ammonium, the reaction takes place at only 136 mV, which saves the corresponding amount of energy. Thus, with suitable catalysts, ammonium could serve as a reducing agent for hydrogen production, and microorganisms in the wastewater could be such catalysts. Moreover, without oxygen, methanogens become active in the wastewater and consume the produced hydrogen:

4 H2 + HCO3− + H+ → CH4 + 3 H2O; ∆G°’ = −34 kJ/mol (H2)

The methanogenic reaction keeps the hydrogen concentration so low (usually below 10 Pa) that the ammonium oxidation proceeds spontaneously, i.e. with energy gain:

8 NH4+ + 3 HCO3− → 4 N2 + 3 CH4 + 5 H+ + 9 H2O; ∆G°’ = −30 kJ/mol (CH4)
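How strongly the low hydrogen pressure pulls the reaction forward can be estimated from the concentration term RT·ln(p/p°) per mole of H2; this rough sketch ignores the activities of all other species:

```python
import math

R, T = 8.314, 298.15        # J/(mol·K) and K
p, p0 = 10.0, 101325.0      # Pa: H2 pressure kept by methanogens vs. standard
print(R * T * math.log(p / p0) / 1000.0)   # ≈ −22.9 kJ per mole of H2
# Each mole of H2 formed at 10 Pa is roughly 23 kJ more favorable than
# under standard conditions.
```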

This overall reaction is exactly the one described at the beginning. The bioelectrical methanogens grow at cathodes and belong to the genus Methanobacterium. Members of this genus thrive at low H2 concentrations.

The low energy gain is due to the small potential difference of ΔEh = +33 mV between CO2 reduction and ammonium oxidation (see the Pourbaix diagram above). The energy captured is barely sufficient for ADP phosphorylation (∆G°’ = +31 kJ/mol). In addition, the nitrogen bond energy is inherently high, which requires strong oxidants such as O2 (nitrification) or nitrite (anammox) to break it.

Instead of strong oxidizing agents, an anode may provide the activation energy for the ammonium oxidation, for example when poised at +500 mV. However, such positive redox potentials do not occur naturally in anaerobic environments. Therefore, we tested whether the ammonium oxidation can be coupled to hydrogenotrophic methanogenesis by offering a positive electrode potential without O2. Indeed, we demonstrated this in our article and have filed a patent application. With our method one could, for example, profitably remove ammonia from industrial wastewater. It is also suitable for energy storage when, for example, ammonia is synthesized using excess wind energy.