
Future challenges for wind energy

Because wind turbines have been operating for decades, many people believe there is no need for further improvement. Yet wind energy has the potential to become one of the world’s cheapest energy sources. A recent article in the journal Science addresses the major challenges that must be overcome to drive innovation in wind energy. Essentially, three directions were identified:

  1. The better use of wind currents
  2. Structural and system dynamics of wind turbines
  3. Grid reliability of wind power

In order to make better use of wind currents, the dynamics of air masses and their interactions with the land and with turbines must be understood. Our knowledge of wind currents in complex terrain and under different atmospheric conditions is very limited. We have to model these conditions more precisely so that the operation of large wind turbines becomes more productive and cheaper.

To capture more energy, wind turbines have grown in size. At the same time, turbines increasingly share large areas with other turbines, so the wakes of neighboring machines alter the flow each turbine experiences.

As the height of wind turbines increases, we also need to understand the dynamics of the wind at these heights. The use of simplified physical models has so far allowed wind turbines to be installed and their performance to be predicted across a variety of terrain types. The next challenge is to model these different conditions so precisely that wind turbines can be optimized to be inexpensive and controllable, and installed in the right place.

The second essential direction is a better understanding of wind turbine structure and system dynamics. Today, wind turbines are the largest flexible rotating machines in the world. Blade lengths routinely exceed 80 meters, and towers rise well over 100 meters. To put this into perspective, three Airbus A380s would fit within the rotor-swept area of a single turbine. To operate under ever increasing structural loads, these systems are getting bigger and heavier, which calls for new materials and manufacturing processes, because scalability, transport, structural integrity and the recycling of the materials used are reaching their limits.

In addition, the interface between turbine and atmospheric dynamics raises several important research questions. Many simplified assumptions on which previous wind turbines are based no longer apply. The challenge is not only to understand the atmosphere, but also to find out which factors are decisive for the efficiency of power generation as well as for structural safety.

The third essential direction concerns the power grid, which in its current form is not designed for the integration of large additional wind resources. The grid of the future will therefore have to be fundamentally different from today’s. A sharp increase in variable wind and solar power is expected. To maintain a functional, efficient and reliable grid, these generators must be predictable and controllable. Renewable electricity generators must also be able to provide not only electricity but also stabilizing grid services. The path to this future requires integrated systems research at the interfaces between atmospheric physics, wind turbine dynamics, plant control and grid operation. This also includes new energy storage solutions such as power-to-gas.

Wind turbines and their associated electricity storage can provide important grid services such as frequency control, ramp control and voltage regulation. Innovative control strategies could exploit the properties of wind turbines to optimize the energy production of the system while providing these essential services. Modern data processing technologies, for example, can handle large amounts of sensor data and apply them to the entire system. This can improve energy capture, which in turn can significantly reduce operating costs. Realizing these goals requires extensive research at the interfaces of atmospheric flow modeling, individual turbine dynamics, turbine control and the operation of larger electrical systems.
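To make the idea of frequency control more concrete, here is a minimal sketch of a generic droop characteristic, the standard proportional rule used by grid-supporting generators. It is our own illustration, not a controller from the Science article, and the 4% droop and the power rating are arbitrary example values.

```python
# Generic frequency droop rule (illustrative only): a wind turbine or its
# storage adjusts power output in proportion to the grid frequency deviation.

NOMINAL_HZ = 50.0   # European grid frequency
DROOP = 0.04        # 4 % droop: full rated power swing over a 4 % frequency change

def droop_power_adjustment(frequency_hz: float, rated_kw: float) -> float:
    """Power adjustment in kW requested by a proportional droop characteristic."""
    deviation = (frequency_hz - NOMINAL_HZ) / NOMINAL_HZ
    return -deviation / DROOP * rated_kw

# When the frequency sags to 49.9 Hz, a 3 MW turbine would be asked to
# inject roughly 150 kW extra (if wind or storage headroom is available).
print(droop_power_adjustment(49.9, rated_kw=3000))  # ≈ 150
```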

Advances in science are essential to drive innovation, cut costs and achieve smooth integration into the power grid. In addition, environmental factors must be taken into account when expanding wind energy. To be successful, the expansion of wind energy must proceed responsibly so that damage to the landscape is minimized. Investments in science and interdisciplinary research in these areas will certainly help to find solutions acceptable to everyone involved.

Such projects include studies that characterize and help understand the effects of wind energy on wildlife. Research that enables innovation and the development of inexpensive technologies for investigating the effects of wind turbines on wildlife, both onshore and offshore, is currently being pursued intensively. It must also be understood how wind farms can be sited so that local impacts are minimized while the affected communities benefit economically.

These major challenges in wind research complement each other. The characterization of the operating zone of wind turbines in the atmosphere will be crucial for the development of the next generation of even larger, more economical wind turbines. Understanding both the dynamic control of the turbines and the prediction of the atmospheric inflow enables better control.

As an innovative company, Frontis Energy supports the transition to CO2-neutral energy generation.


Turbocharged lithium batteries at high temperatures

One of the biggest hurdles for the electrification of road traffic is the long charging time for lithium batteries in electric vehicles. A recent research report has now shown that charging time can be reduced to 10 minutes while the battery is being heated.

A lithium battery can power a 320-kilometer trip after only 10 minutes of charging − provided that its temperature is higher than 60 °C while charging.

Lithium-ion batteries, which use lithium ions to generate electricity, charge slowly at room temperature: a full charge takes more than three hours, compared with roughly three minutes to refuel a car.

A critical barrier to rapid charging is lithium plating, which normally occurs at high charging rates and drastically reduces the life and safety of the batteries. Researchers at Pennsylvania State University in University Park have now introduced an asymmetric temperature modulation method that charges a lithium battery at an elevated temperature of 60 °C.

High-speed charging typically encourages lithium to coat one of the battery electrodes (lithium plating). This will block the flow of energy and eventually make the battery unusable. To prevent lithium deposits on the anodes, the researchers limited the exposure time at 60 °C to only ~10 minutes per cycle.
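To put the 10-minute window into perspective, a short back-of-the-envelope calculation (our own, with illustrative numbers rather than values from the study) shows how aggressive such a charge is in terms of C-rate:

```python
# Illustrative arithmetic only: the average C-rate needed to add a given
# fraction of a cell's capacity within a fixed time window.

def required_c_rate(charge_fraction: float, minutes: float) -> float:
    """Average C-rate = fraction of capacity added per hour of charging."""
    return charge_fraction / (minutes / 60.0)

# Adding 80 % of the capacity in 10 minutes requires roughly a 5C average rate,
# exactly the regime in which lithium plating normally becomes a problem.
print(required_c_rate(0.80, 10))  # ≈ 4.8
```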

Using industrially available materials, the researchers limited the capacity loss to 20% after 500 cycles. A battery charged at room temperature, in contrast, could only be fast-charged for 60 cycles before its electrode was plated.

The asymmetrical temperature between charging and discharging opens up a new way to improve the ion transport during charging and at the same time achieve a long service life.

For many decades it was generally believed that lithium batteries should not be operated at high temperatures because of accelerated material degradation. Contrary to this conventional wisdom, the researchers introduced a rapid charging process that charges a cell at 60 °C and discharges it at a cool temperature. In addition, charging at 60 °C reduces the battery cooling requirement by a factor of more than 12.

In battery applications, the discharge profiles depend on the end user, while the charging protocol is determined by the manufacturer and can therefore be specially designed and controlled. The quick-charging process presented here opens up a new way of designing electrochemical energy systems that can achieve high performance and a long service life at the same time.

At Frontis Energy we also think that the new simple charging process is a promising method. We are looking forward to the market launch of this new rapid charging method.

(Photo: iStock)


Bioelectrically engineered fuel produced by yeasts

Yeasts such as Saccharomyces cerevisiae are, as the name suggests, used for large scale production of beer and other alcoholic beverages. Their high salt and ethanol tolerance not only makes them useful for the production of beverages, but also suitable for the production of combustion fuels at high alcohol concentrations. Besides ethanol, long-chain fusel alcohols are of high interest for biofuel production as well. Bioethanol is already mixed with gasoline and thus improves the CO2 balance of internal combustion engines. This liquid biofuel is made from either starch or lignocellulose. The production and use of bioethanol supports local economies, reduces CO2 emissions and promotes self-sufficiency. The latter is especially important for resource-depleted landlocked countries.

In order to produce ethanol and other alcohols efficiently from lignocellulose hydrolysates, yeasts must use both glucose and pentoses such as xylose and arabinose, because lignocellulosic biomass is rich in both glucose and xylose. This, however, is the main disadvantage of Saccharomyces cerevisiae: it does not ferment xylose. Consequently, identifying other yeast strains capable of fermenting both sugars could solve the problem. Highly efficient ethanol-producing yeasts can be grown in co-culture with other yeasts capable of fermenting lignocellulose sugars. One such yeast is, for example, Wickerhamomyces anomalus.

To further improve ethanol production, bioelectric fermentation technology supporting traditional fermentation can be used. The microbial metabolism can thus be controlled electrochemically. There are many benefits of this technology. The fermentation process becomes more selective due to the application of an electrochemical potential. This, in turn, increases the efficiency of sugar utilization. In addition, the use of additives to control the redox equilibrium and the pH is minimized. Ultimately cell growth can be stimulated, further increasing alcohol production.

Such bioelectric reactors are galvanic cells. The electrodes used in such a reactor may act as electron acceptors (anodes) or electron sources (cathodes). These electrochemical conditions affect the metabolism and cell regulation as well as the interactions between the yeasts used. Now, a research group from Nepal (a resource-depleted landlocked country) has used new strains of Saccharomyces cerevisiae and Wickerhamomyces anomalus in a bioelectric fermenter to improve ethanol production from biomass. The results were published in the journal Frontiers in Energy Research.

For their study, the researchers chose Saccharomyces cerevisiae and Wickerhamomyces anomalus because both are good ethanol producers, and the latter is also able to convert xylose to ethanol. After the researchers applied a voltage to the bioelectric system, ethanol production doubled. Both yeasts formed a biofilm on the electrodes, making the system ideal for use as a flow-through system because the microorganisms are not washed out.

Saccharomyces cerevisiae cells in a brightfield microscope image at 600-fold magnification (Photo: Amanda Luraschi)

The researchers speculated that the increased ethanol production was due to the better conversion of pyruvate to ethanol − the yeast’s central metabolic mechanism. The researchers attributed this to accelerated redox reactions at the anode and cathode. The applied external voltage polarized the ions present in the cytosol, thus facilitating the electron transfer from the cathode. This and the accelerated glucose oxidation probably led to increased ethanol production.

Normally, pyruvate is converted into ethanol in fermenting yeast. An external voltage can control the kinetics of glucose metabolism in Saccharomyces cerevisiae under both aerobic and anaerobic conditions. Intracellular and trans-plasma-membrane electron transfer systems play an important role in electron transport across the cell membrane. This electron transfer system consists of cytochromes and various redox enzymes, which confer redox activity to the membrane at certain sites.
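For orientation, the textbook stoichiometry of alcoholic fermentation (a general reference point, not a result of this study) is:

C6H12O6 → 2 C2H5OH + 2 CO2

This corresponds to a theoretical maximum of roughly 0.51 g of ethanol per gram of glucose; the electrochemical enhancement described above aims to bring the actual yield closer to this limit.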

The authors also found that an increased salt concentration improved conductivity and therefore ethanol production. The increased ethanol production from lignocellulosic biomass may also have been due to the presence of various natural compounds that promoted yeast growth. When the cellulose acetate membrane was replaced by a Nafion™ membrane, ethanol production increased as well, perhaps due to improved transport of xylose through the Nafion™ membrane and a lower internal resistance. A further increase in ethanol production was observed when the bioelectric reactor was operated with fine platinum particles coated on the platinum anode and neutral red deposited on the graphite cathode.

Several yeast cultures from left to right: Saccharomyces cerevisiae, Candida utilis, Aureobasidium pullulans, Trichosporon cutaneum, Saccharomycopsis capsularis, Saccharomycopsis lipolytica, Hanseniaspora guilliermondii, Hansenula capsulata, Saccharomyces carlsbergensis, Saccharomyces rouxii, Rhodotorula rubra, Phaffia rhodozyma, Cryptococcus laurentii, Metschnikowia pulcherrima, Rhodotorula pallida

At Frontis Energy, we think that the present study is promising. However, long-chain fusel alcohols should be considered in the future as they are less volatile and more compatible with current internal combustion engines. They can also easily be converted into the corresponding long-chain hydrocarbons.


Framework for a global carbon budget

Over the past decade, numerous studies have shown that global warming is roughly proportional to the concentration of CO2 in our atmosphere. In this way one can estimate our remaining carbon budget. This is the total amount of man-made carbon dioxide that can still be released into the atmosphere before reaching a set global temperature limit. The nations of the world agreed on this limit in the 2015 Paris Agreement: warming should not exceed 1.5°C, and in any case remain well below 2.0°C. However, diverging estimates have been made for the remaining carbon budget, which has a negative impact on policy-making. Now, an international research group of renowned climate experts has published a framework for the calculation of the global CO2 budget in Nature. The researchers suggest that the application of this framework should help to overcome the differences between carbon budget estimates, which will help to reduce uncertainties in research and policy.

Since the fifth report of the Intergovernmental Panel on Climate Change (IPCC), the concept of a carbon budget has become more important as an instrument for guiding climate policy. Over the past decade, a series of studies has clarified why the increase in the global average temperature is roughly proportional to the total amount of CO2 emissions caused by human activity since the Industrial Revolution. In their framework, the research group cites numerous published studies that provide evidence for the linearity of this correlation. This literature has allowed scientists to express the linear relationship between warming and CO2 emissions as the transient climate response to cumulative CO2 emissions (TCRE). The linearity is an appealing concept given the complexity of the Earth’s response to our CO2 emissions. Additional processes that affect future warming have been included in recent models, among them, for example, the thawing of the Arctic permafrost. These additional processes increase the uncertainty of current climate models. In addition, global warming is not caused by CO2 emissions alone. Other greenhouse gases, such as methane, fluorinated gases or nitrous oxide, as well as aerosols and their precursors, affect global temperatures. This further complicates the relationship between future CO2 emissions and future warming.

In the case of global warming caused by CO2, every tonne contributes to warming, whether it is emitted in the future, today or in the last century. This means that global CO2 emissions must be reduced to zero and then remain at zero. It also means that the more we emit in the coming years, the faster we have to cut our emissions later. At zero emissions, warming would stabilize but not disappear, although it may partly reverse. An overdraft of the carbon budget would have to be compensated by removing CO2 later. One way of removing CO2 from the atmosphere would be a technology called direct air capture, which we reported on earlier. Ultimately, this will probably be the only way left, as carbon-neutral renewable energy sources only make up 5% of our energy mix. Establishing a global carbon budget further highlights the urgency of our clean energy transition. Unfortunately, there is a large divergence when it comes to the amount of CO2 remaining in our carbon budget. In their framework, the researchers cite numerous studies on carbon budgets that maintain our 1.5°C target. Starting in 2018, these estimates range from 0 to 1,000 gigatons of CO2. For the 2.0°C target, our carbon budget ranges from around 700 gigatons to nearly 2,000 gigatons of remaining CO2 emissions. The aim of the researchers is to limit this uncertainty by establishing a budget framework. The central element is the equation for calculating the remaining carbon budget:

Blim = (Tlim − Thist − TnonCO2 − TZEC) / TCRE − EEsfb

The budget of the remaining CO2 emissions (Blim) for a specific temperature limit (Tlim) is a function of five terms that represent aspects of the geophysical and human-environment systems: the historical human-induced warming (Thist), the non-CO2 contribution to the future temperature increase (TnonCO2), the zero-emissions commitment (TZEC), the TCRE, and an adjustment for possible unrepresented Earth system feedbacks (EEsfb).
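To make the role of each term tangible, here is a minimal sketch of the budget equation in Python. The numbers in the example call are illustrative placeholders chosen by us, not values from the framework paper.

```python
# Remaining carbon budget B_lim following the equation above.
# All example values below are illustrative, not from the study.

def remaining_budget(t_lim, t_hist, t_nonco2, t_zec, tcre, e_esfb):
    """Remaining CO2 budget in GtCO2.

    t_lim    -- temperature limit in °C (e.g. 1.5)
    t_hist   -- historical human-induced warming in °C
    t_nonco2 -- future warming contribution of non-CO2 forcers in °C
    t_zec    -- zero-emissions commitment in °C
    tcre     -- transient climate response to cumulative emissions in °C per GtCO2
    e_esfb   -- adjustment for unrepresented Earth system feedbacks in GtCO2
    """
    return (t_lim - t_hist - t_nonco2 - t_zec) / tcre - e_esfb

# Example: a 1.5 °C limit, 1.0 °C already realized, 0.1 °C expected from non-CO2
# forcers, no zero-emissions commitment, a TCRE of 0.45 °C per 1000 GtCO2 and a
# 100 GtCO2 feedback adjustment give a budget of roughly 790 GtCO2.
print(remaining_budget(1.5, 1.0, 0.1, 0.0, 0.45e-3, 100))
```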

 

Term | Key choices or uncertainties | Type | Level of understanding
--- | --- | --- | ---
Temperature limit (Tlim) | Choice of the temperature metric used to define global warming, choice of the pre-industrial reference and consistency with global climate targets | Choice | Medium to high
Historical man-made warming (Thist) | Incomplete data and methods for estimating the man-made component; see also Tlim | Choice and uncertainty | Medium to high
Non-CO2 contribution to future global warming (TnonCO2) | Level of non-CO2 contributions coinciding with global net-zero CO2 emissions; depends on policy choices, but also on the uncertainty of their implementation | Choice and uncertainty | Medium
Non-CO2 contribution to future global warming (TnonCO2) | Climate response to non-CO2 forcers, such as aerosols and methane | Uncertainty | Low to medium
Zero-emissions commitment (TZEC) | Extent of the decadal zero-emissions commitment at near-zero annual carbon emissions | Uncertainty | Low
Transient climate response to cumulative CO2 emissions (TCRE) | TCRE uncertainty, linearity and the cumulative CO2 emissions and temperature metrics that affect the TCRE estimate | Uncertainty | Low to medium
Transient climate response to cumulative CO2 emissions (TCRE) | Uncertainty of the TCRE linearity, value and distribution beyond peak warming, which is affected by cumulative CO2 emission reductions | Uncertainty | Low
Unrepresented Earth system feedbacks (EEsfb) | Impact and duration of permafrost thawing as well as methane release from wetlands on Earth system models and feedbacks | Uncertainty | Very low

In the CO2 budget, the unrepresented Earth system feedbacks (EEsfb) are arguably the greatest uncertainty. These feedback processes are typically associated with the thawing of permafrost and the associated long-term release of CO2 and CH4. However, other feedbacks have been identified as well. These include, for example, variations in CO2 uptake by vegetation and the associated nitrogen availability. Further feedback processes involve changes in surface albedo, cloud cover, or fire conditions.

It remains a challenge to adequately characterize the uncertainties surrounding the estimates of our carbon budget. In some cases, the reason for these uncertainties is inaccurate knowledge of the underlying processes or inaccurate measurements. In other cases, the terminology is used inconsistently. For better comparability and flexibility, the researchers propose to routinely report global surface air temperature values, which provide robust data for models and model runs over selected time periods. More detailed comparisons between published estimates of the carbon budget are currently difficult because the original data behind the publications are often missing. The researchers therefore propose to provide these data along with future publications.

Breaking down the carbon budget into its individual factors makes it possible to identify a number of promising pathways for future research. One area of research that might advance this field is to look more closely at the TCRE. Future research is expected to narrow down the range of TCRE uncertainties. Another promising area of research is the study of the correlation between individual factors and their associated uncertainties, for example between uncertainties in Thist and TnonCO2. This could be achieved by developing methods that allow a more reliable estimate of historical human-induced warming. It is also clear that reduced-complexity climate models are useful to further reduce the uncertainties of climate models, and hence of the carbon budget. Currently, each factor of the framework presented by the researchers has its own uncertainties, and there is no method to formally combine them.

At Frontis Energy, too, we think that progress in these areas would improve our understanding of the estimates of our carbon budget. A systematic understanding of the carbon budget is crucial for effectively addressing the challenges of global warming.


Cheap, high-octane biofuel discovered

Researchers from the National Renewable Energy Laboratory (NREL) have developed a cheap method for producing high-octane gasoline from methanol. They recently published their method in the journal Nature Catalysis. Methanol can be synthesized from CO2 via various routes, as we reported last year. Biomass, such as wood, is one possibility.

The production of biofuels from wood, however, is too expensive to compete with fossil fuels. To find a solution to this problem, the researchers combined their basic research with an economic analysis. They first targeted the most expensive part of the process and then found ways to reduce these costs, using methanol as an intermediate.

Until now, the cost of converting methanol to gasoline or diesel has been about $1 per gallon. The researchers have now reached a price of about $0.70 per gallon.

The catalytic conversion of methanol into gasoline is an important research area in the field of CO2 recovery. The traditional method is based on multi-stage processes and high temperatures. It is expensive, producing low quality fuel in small quantities. Thus, it is not competitive with petroleum-based fuels.

Hydrogen deficiency was the first problem the researchers had to overcome. Hydrogen is the key energy-containing element in hydrocarbons. The researchers hypothesized that using the transition metal copper would solve this problem, which it did. They estimated that the copper-infused catalyst resulted in a 38% higher yield at lower cost.

By facilitating the reintegration of C4 byproducts during the homologation of dimethyl ether, the copper zeolite catalyst enabled this 38% increase in product yield and a 35% reduction in conversion cost compared with conventional zeolite catalysts. Alternatively, the C4 byproducts were converted into a synthetic kerosene meeting five specifications for a typical jet fuel. In that case, the fuel synthesis costs increased slightly; even though the cost savings are smaller, the resulting product has a higher value.

Apart from the costs, the new process offers users further competitive advantages. For example, companies can compete with ethanol producers for credits for renewable fuels (if the carbon used comes from biogas or household waste). The process is also compatible with existing methanol plants that use natural gas or solid waste to produce syngas.


Semiconductive nanotubes with photovoltaic effect

Cost-effective and efficient methods for converting sunlight into electricity are the focus of green energy research. Solar cells developed for this purpose are currently made of semiconductors such as silicon. Electrical energy is generated at the junction between two different semiconductors. However, the efficiency of these solar cells has almost reached its theoretical limit. New methods of converting sunlight into electricity must be found if solar energy is to be used as a major source of electricity. An international research team from Germany, Japan and Israel has now made important progress in this direction. Zhang and colleagues recently published their findings in the prestigious journal Nature. They demonstrate a junction-free solar cell made by rolling an atomically thin semiconductor layer into a nanotube.

In a conventional solar cell, two regions of a semiconductor are doped with different chemical elements. The electrical current is carried by the negatively charged electrons of one region and by the positively charged holes of the other. At the junction between these two regions, an electric field is created. When sunlight is absorbed at this junction, electron-hole pairs are formed. The electrons and holes are then separated by the electric field, generating an electric current. This conversion of solar energy into electricity is called the photovoltaic effect. It is particularly important for green energy production, but as mentioned above, its efficiency has almost reached the theoretical limit.

In technical terms, the photovoltaic effect occurs at traditional p-n junctions, where a p-type material (with an excess of holes) adjoins an n-type material (with an excess of electrons). Current is generated by the photo-induced creation of electron-hole pairs and their subsequent separation. Further advances are expected from other photovoltaic effects that do not require a junction and occur only in crystals with broken inversion symmetry. However, the practical implementation of these effects has been impeded by the low efficiency of the materials. Semiconductors with reduced dimensionality or smaller band gaps have been shown to be more efficient. Transition metal dichalcogenides (TMDs) are, for example, two-dimensional small-bandgap semiconductors in which various such effects have been observed when the inversion symmetry of their bulk crystals is broken.

The reported bulk photovoltaic effect (BPVE) is based on tungsten disulfide, a member of the TMD family. Crystals of this material have a layered structure and can be exfoliated into atomically thin sheets, similar to graphite. The resulting sheets can then be rolled by chemical methods into tubes of about 100 nanometers. The authors produced photovoltaic devices from three forms of tungsten disulfide: a monolayer, a bilayer and a nanotube.

In this way, the authors achieved a systematic reduction in crystal symmetry that goes beyond merely breaking inversion symmetry: the transition from a two-dimensional monolayer to a nanotube with polar properties significantly enhanced the effect. The photovoltaic current density produced is orders of magnitude greater than that of other comparable materials. The results confirm not only the potential of TMD-based nanomaterials but also the importance of reduced crystal symmetry for improving the BPVE.

While the nanotube devices showed a large BPVE, the monolayer and bilayer devices produced only a negligible electric current under illumination. The researchers attribute the different performance of the devices to their different crystal symmetry. In this way, a current can be generated spontaneously in uniform semiconductors, without any junction.

The BPVE was first observed in 1956 at Bell Labs, New Jersey, just two years after the invention of the modern silicon solar cell. The effect is limited to non-centrosymmetric materials, which lack symmetry under spatial inversion, that is, under the combination of a 180° rotation and a reflection. The effect has two attractive properties: the current generated by light depends on the polarization of the incident light, and the associated voltage can be greater than the band gap of the material, the energy required to excite free conducting electrons. However, the effect typically has a low conversion efficiency and was therefore of academic rather than industrial interest.

To achieve high efficiency, a photovoltaic material must have high light absorption and low internal symmetry. However, these two properties usually do not exist simultaneously in a given material. Semiconductors that absorb most of the incident sunlight generally have high symmetry. This reduces or even prevents the effect. Low-symmetry materials, such as perovskite oxides, absorb little sunlight due to their large band gap. To circumvent this problem, efforts have been made to improve light absorption in low-symmetry materials, for example by using the mentioned doping. Meanwhile, it has been shown that the effect can occur in semiconductors by using mechanical fields to adjust the crystal symmetry of the material.

The newly discovered solution is encouraging with regard to the production of high absorption semiconducting nanotubes. In the case of tungsten disulfide, the crystal symmetry of the nanotubes is reduced compared to the mono- and bilayers due to the curved walls of the tube. The combination of excellent light absorption and low crystal symmetry means that the nanotubes have a significant photovoltaic effect. The current density exceeds that of materials which are inherently low in symmetry. Nevertheless, the conversion efficiency achieved is still much lower than that of the photovoltaic effect in conventional junction-based solar cells.

The authors’ findings demonstrate the great potential of nanotubes in solar energy production and raise various technological and scientific challenges. From an application perspective, it would be useful to produce a solar cell that consists of large arrays of semiconductor nanotubes in order to check whether the approach is scalable. The direction of the generated current is largely determined by the internal symmetry of the material. Uniform symmetry across the nanotube array would therefore be required to create a collective current; otherwise, the currents of individual tubes could cancel each other out.

At Frontis Energy, we wonder if the method described could work with the classic photovoltaic effect in the same solar cell. That would possibly increase overall efficiency. The two effects could use the solar energy consecutively. Despite the remaining challenges, the presented work offers a possibility for the development of highly efficient solar cells.

(Photo: Wikipedia)


Machine learning makes smarter batteries

Renewable energies such as wind and solar are naturally intermittent. To balance demand and supply, the batteries of electric vehicles, for example, can be charged and act as an energy buffer for the power grid. Cars spend most of their time idle and could, during that time, feed electricity back into the grid. While this is still a vision of the future, the commercialization of electric and hybrid vehicles is already creating a growing demand for long-lasting batteries, both for driving and for grid buffering. Consequently, methods for evaluating the state of a battery will become increasingly important.

The long duration of battery health tests is a problem that hinders the rapid development of new batteries. Better methods for forecasting battery life are therefore urgently needed but extremely difficult to develop. Now, Severson and her colleagues report in the journal Nature Energy that machine learning can help to predict battery life by building computer models. The published algorithms use data from early charge and discharge cycles.

Normally, a figure of merit describes the health of a battery. It quantifies the ability of the battery to store energy relative to its original state. The health status is 100% when the battery is new and decreases with time. This is similar to the state of charge of a battery. Estimating the state of charge of a battery is, in turn, important to ensure safe and correct use. However, there is no consensus in the industry and science as to what exactly a battery’s health status is or how it should be determined.

The state of health of a battery reflects two signs of aging: progressive capacity decline and impedance increase (another measure of electrical resistance). Estimates of the state of charge of a battery must therefore take into account both the drop in capacity and the increase in impedance.
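As a simple illustration of these two aging signatures, the following sketch computes a capacity-based state of health and a relative impedance rise. The definitions and example numbers are a simplified illustration of our own, not an industry standard.

```python
# Simplified state-of-health metrics (illustrative definitions and numbers).

def state_of_health(capacity_now_ah: float, capacity_nominal_ah: float) -> float:
    """Capacity-based state of health in percent of the original capacity."""
    return 100.0 * capacity_now_ah / capacity_nominal_ah

def impedance_rise(r_now_mohm: float, r_initial_mohm: float) -> float:
    """Relative increase of the internal resistance in percent."""
    return 100.0 * (r_now_mohm - r_initial_mohm) / r_initial_mohm

print(state_of_health(2.6, 3.0))   # ≈ 86.7: the cell holds ~87 % of its original capacity
print(impedance_rise(38.0, 30.0))  # ≈ 26.7: internal resistance has grown by ~27 %
```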

Lithium-ion batteries, however, are complex systems in which both capacity fade and impedance increase are caused by multiple interacting processes. Most of these processes cannot be studied independently since they often occur simultaneously. The state of health can therefore not be determined from a single direct measurement. Conventional health assessment methods include examining the interactions between the electrodes of a battery. Since such methods often intervene directly in the battery itself, they render it unusable, which is hardly desirable.

A battery’s health status can also be determined in less invasive ways, for example using adaptive models and experimental techniques. Adaptive models learn from recorded battery performance data and adjust themselves accordingly. They are useful if system-specific battery information is not available. Such models are suitable for diagnosing aging processes. Their main drawback, however, is that they must be trained with experimental data before they can be used to determine the current capacity of a battery.

Experimental techniques are used to evaluate specific physical processes and failure mechanisms, which allows the rate of future capacity loss to be estimated. Unfortunately, these methods cannot detect intermittent faults. Alternative techniques use the rate of voltage or capacity change (rather than raw voltage and current data). To accelerate the development of battery technology, further methods are needed that can accurately predict battery life.

Severson and her colleagues have created a comprehensive data set that includes the performance data of 124 commercial lithium-ion batteries during their charge and discharge cycles. The authors used a variety of rapid-charging conditions with identical discharge conditions, which produced a wide spread of battery lifetimes, ranging from 150 to 2,300 cycles.

The researchers then used machine learning algorithms to analyze the data, creating models that reliably predict battery life. Their models already gave useful predictions after the first 100 cycles of each experimentally characterized battery, at a point where capacity fade is barely apparent. The best model could predict the lifetime of about 91% of the data sets in the study. Using only the first five cycles, batteries could be classified into categories with short (<550 cycles) or long lifetimes.
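The following sketch illustrates this early-cycle classification idea with scikit-learn. The features, labels and data are synthetic stand-ins that we made up for illustration; the authors’ actual models use engineered features derived from early-cycle charge and discharge data.

```python
# Synthetic illustration of classifying cells as short- or long-lived from
# early-cycle features (toy data, not the published data set or model).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

n_cells = 124                           # same number of cells as in the study
X = rng.normal(size=(n_cells, 2))       # stand-ins for two early-cycle features
# Toy ground truth: a noisy linear rule decides which cells exceed 550 cycles.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n_cells) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"test accuracy on toy data: {clf.score(X_test, y_test):.2f}")
```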

The researchers’ work shows that data-driven modeling using machine learning allows the state of health of lithium-ion batteries to be forecast. The models can identify aging processes that are not otherwise apparent in capacity data during early cycles. Accordingly, the new approach complements previous predictive models. At Frontis Energy, we also see the potential to combine such data with models that predict the behavior of other complex dynamic systems.

(Photo: Wikipedia)


Faster photoelectrical hydrogen

Achieving high current densities while maintaining high energy efficiency is one of the biggest challenges in improving photoelectrochemical devices. Higher current densities accelerate the production of hydrogen and other electrochemical fuels.

Now a compact, solar-powered, hydrogen-producing device has been developed that provides the fuel at record speed. In the journal Nature Energy, the researchers around Saurabh Tembhurne describe a concept that allows capturing concentrated solar radiation (up to 474 kW/m²) by thermal integration, mass transport optimization and better electronics between the photoabsorber and the electrocatalyst.

The research group at the Swiss Federal Institute of Technology in Lausanne (EPFL) calculated the maximum theoretical increase in efficiency and then experimentally verified the calculated values using a photoabsorber and an iridium-ruthenium oxide-platinum based electrocatalyst. The electrocatalyst reached a current density greater than 0.88 A/cm². The calculated conversion efficiency of solar energy into hydrogen was more than 15%. The system was stable under various conditions for more than two hours. Next, the researchers want to scale up their system.
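A quick plausibility check (our own arithmetic, not part of the publication) converts the reported current density into a hydrogen production rate via Faraday’s law:

```python
# Faraday's law: hydrogen production rate from the reported current density.

FARADAY = 96485.0        # C per mol of electrons
j = 0.88                 # A/cm², current density reported for the electrocatalyst
ELECTRONS_PER_H2 = 2     # 2 H+ + 2 e- -> H2

mol_h2_per_s_cm2 = j / (ELECTRONS_PER_H2 * FARADAY)
litres_per_h_cm2 = mol_h2_per_s_cm2 * 3600 * 24.5   # ~24.5 L/mol at 25 °C, 1 atm

print(f"{litres_per_h_cm2:.2f} L of H2 per hour per cm² of electrode area")  # ≈ 0.40
```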

The produced hydrogen can be used in fuel cells for power generation, which is why the developed system is suitable for energy storage. Hydrogen-based power generation emits only pure water, but the clean and fast production of hydrogen itself is still a challenge. In the photoelectrochemical method, materials similar to those of solar modules are used. In the new system, the electrolytes were based on water, although ammonia would also be conceivable. Sunlight reaching these materials triggers a reaction in which water is split into oxygen and hydrogen. So far, however, no photoelectrochemical method has been usable on an industrial scale.

2 H2O → 2 H2 + O2; ∆G°’ = +237 kJ/mol (H2)

The newly developed system absorbed more than 400 times the amount of solar energy that normally shines on a given area. The researchers used high-power lamps to provide the necessary “solar energy”. Existing solar systems concentrate solar energy to a similar degree with the help of mirrors or lenses. The waste heat is used to accelerate the reaction.

The team predicts that the test equipment, with a footprint of approximately 5 cm, can produce an estimated 47 liters of hydrogen gas in six hours of sunshine. This is the highest rate per area for such solar powered electrochemical systems. At Frontis Energy we hope to be able to test and offer this system soon.

(Photo: Wikipedia)


Ammonia energy storage #3

As a loyal reader of our blog, you will certainly remember our previous publications on ammonia energy storage, in which we describe possible ways to extract ammonia from the air as well as the recovery of its energy in the form of methane (patent pending WO2019/079908A1). Since global food production requires large amounts of ammonia fertilizers, the technology for producing ammonia from air is already very mature. It is essentially based on the Haber-Bosch process, which was industrialized at the beginning of the last century. In this process, atmospheric nitrogen (N2) is reduced to ammonia (NH3). Despite the simplicity of the molecules involved, the cleavage of the strong nitrogen−nitrogen bond in N2 and the formation of nitrogen−hydrogen bonds pose a major challenge for catalytic chemists. The reaction usually takes place under harsh conditions and requires a lot of energy, i.e. high reaction temperatures, high pressures and complicated combinations of reagents, which are often expensive and energy-intensive to manufacture.

Now, a research group led by Yuya Ashida has published an article in the renowned journal Nature in which they show that a samarium compound in aqueous solution, combined with a molybdenum catalyst, can form ammonia from atmospheric nitrogen. The work opens up new possibilities in the search for routes to ammonia synthesis under ambient conditions. Under such conditions, less energy is required to produce ammonia, resulting in a higher energy efficiency for energy storage. In today’s Haber-Bosch process, air and hydrogen gas are combined over an iron catalyst. The resulting global ammonia production ranges from 250 to 300 tonnes per minute, delivering fertilizers that help feed nearly 60% of the world’s population (The Alchemy of Air, available at Amazon).

Comparison of different approaches to producing ammonia. Top: In the industrial Haber-Bosch synthesis of ammonia (NH3), nitrogen gas (N2) reacts with hydrogen molecules (H2), typically in the presence of an iron catalyst. The process requires high temperatures and pressures but is thermodynamically ideal because only little energy is wasted on side reactions. Center: Nitrogenase enzymes catalyze the reaction of nitrogen with six electrons (e−) and six protons (H+) under ambient conditions to form ammonia. However, two additional electrons and protons form one molecule of H2. The conversion of ATP (the biological energy “currency”) into ADP drives the reaction. This reaction has a high chemical overpotential: it consumes much more energy than is needed for the actual ammonia-forming reaction. Bottom: In the new reaction proposed by Ashida and colleagues, a mixture of water and samarium diiodide (SmI2) is converted to ammonia using nitrogen under ambient conditions and in the presence of a molybdenum catalyst. SmI2 weakens the O−H bonds of the water and generates the hydrogen atoms, which then react with atmospheric nitrogen.

On an industrial scale, ammonia is synthesized at temperatures above 400°C and pressures of approximately 400 atmospheres, conditions often referred to as “harsh”. In the early days of the Haber-Bosch process, these harsh conditions were difficult to control, and fatal accidents were not uncommon. This has motivated many chemists to look for “milder” alternatives, which ultimately means searching for new catalysts that lower operating temperatures and pressures. New catalysts would also reduce the capital investment needed to build new fertilizer plants. Since ammonia synthesis is one of the largest producers of carbon dioxide, this would also reduce the associated emissions.

Like many other chemists before them, the authors were inspired by nature. Nitrogenase enzymes carry out the biological conversion of atmospheric nitrogen into ammonia, a process called nitrogen fixation. On today’s Earth, this process is the source of the nitrogen atoms in amino acids and nucleotides, the elemental building blocks of life. In contrast to the Haber-Bosch process, nitrogenases do not use hydrogen gas as a source of hydrogen atoms. Instead, they transfer protons (hydrogen ions, H+) and electrons (e−) to each nitrogen atom to form N−H bonds. Although nitrogenases fix nitrogen at ambient temperature, they use eight protons and electrons per molecule of N2, which is remarkable because the stoichiometry of the reaction requires only six of each. This excess provides the necessary thermodynamic driving force for nitrogen fixation, but it also means that nitrogenases have a high chemical overpotential: they consume much more energy than would actually be needed for nitrogen fixation.
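The commonly cited overall equation of the nitrogenase reaction (textbook stoichiometry, included here for orientation) makes this excess explicit:

N2 + 8 H+ + 8 e− + 16 ATP → 2 NH3 + H2 + 16 ADP + 16 Pi

Only six of the eight electron-proton pairs end up in ammonia; the remaining two are released as H2, which contributes to the enzyme’s chemical overpotential.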

The now published reaction is not the first attempt to mimic the nitrogenase reaction. In the past, metal complexes have been combined with proton and electron sources to convert atmospheric nitrogen into ammonia. The same researchers had previously developed molybdenum complexes that catalyze nitrogen fixation in this way, producing 230 ammonia molecules per molybdenum complex. The associated overpotentials were significant at almost 1,300 kJ per mole of nitrogen. By comparison, the Haber-Bosch process is far less energy-intensive when the right catalyst is used.

The challenge for catalysis researchers is to combine the best of the biological and industrial approaches to nitrogen fixation so that the process proceeds at ambient temperatures and pressures. At the same time, the catalyst must reduce the chemical overpotential far enough that building new fertilizer plants no longer requires such high capital investments. This is a major challenge, as no combination of acids (which serve as proton sources) and reducing agents (the electron sources) is available that sits at the thermodynamic level of hydrogen gas yet is reactive enough to form N−H bonds at room temperature. In the pathway now described with molybdenum and samarium, the researchers have adopted a strategy in which the proton and electron sources are no longer used separately. This is a fundamentally new approach to catalytic ammonia synthesis. It makes use of a phenomenon known as coordination-induced bond weakening, which in this case is based on the interaction of samarium diiodide (SmI2) and water.

Water is stable because of its strong oxygen−hydrogen (O−H) bonds. However, when the oxygen atom of water coordinates to SmI2, it exposes its lone electron pair and its O−H bonds are weakened. As a result, the mixture becomes a readily available source of hydrogen atoms, that is, of protons and electrons. The researchers around Yuya Ashida use this mixture with a molybdenum catalyst to fix nitrogen. SmI2-water mixtures are particularly suitable for this type of catalysis because considerable coordination-induced bond weakening had previously been measured in them and had been exploited, among other things, for the formation of carbon−hydrogen bonds.

The extension of this idea to catalytic ammonia synthesis is remarkable for two reasons. First, the molybdenum catalyst facilitates ammonia synthesis in aqueous solution, which is surprising because molybdenum complexes are usually degraded in water. Second, the use of coordination-induced bond weakening provides a new method for nitrogen fixation under ambient conditions. It also avoids combinations of proton and electron sources that pose a fire hazard. The authors’ approach works as well when ethylene glycol (HOCH2CH2OH) is used instead of water, which extends the range of candidate proton and electron sources by an additional precursor.

Ashida and colleagues propose a catalytic cycle for their process in which the molybdenum catalyst initially coordinates to nitrogen and cleaves the N−N bond to form a molybdenum nitrido complex. This molybdenum nitrido complex contains the molybdenum-nitrogen triple bond. The SmI2-water mixture then delivers hydrogen atoms to this complex, eventually producing ammonia. The formation of N−H bonds with molybdenum nitrido complexes represents a significant thermodynamic challenge since the N−H bonds are also weakened by the molybdenum. Nevertheless, the disadvantages are offset by the reduction of the chemical overpotential. The SmI2 not only facilitates the transfer of hydrogen atoms, but also keeps the metal in a reduced form. This prevents undesired molybdenum oxide formation in aqueous solution.

The new process still has significant operational hurdles to overcome before it can be used on an industrial scale. For example, SmI2 is used in large quantities, which generates a lot of waste. The separation of ammonia from aqueous solution is also energy-intensive. However, if the process were used for energy storage in combination with our recovery method, the separation from the aqueous solution would not be necessary. Finally, there is still a chemical overpotential of about 600 kJ/mol. Future research should focus on finding alternatives to SmI2, for example based on metals that are more abundant than samarium and that also promote coordination-induced bond weakening. As Fritz Haber and Carl Bosch experienced themselves, it will probably take some time before the newly developed method becomes available on an industrial scale.

(Photo: Wikipedia)


Melting ice sheets in Greenland contribute 25% to sea level rise

Recently we reported on the loss of snow cover in Europe. The snow is not only gone in many parts of Europe; Greenland’s ice cover is melting as well. The Greenland ice sheet contributes 25% to global sea-level rise, making it the largest contributor of the cryosphere. The increased mass loss of Greenland ice during the 21st century is mainly due to increased surface water runoff, of which ~93% comes directly from the small ablation zone of the ice sheet (~22% of the ice surface). As the snow melts in summer, bare glacier ice becomes more exposed in this ablation zone. Bare ice is darker and less porous than snow: it absorbs more than twice the solar radiation while also retaining less meltwater. Bare ice produces a large proportion (~78%) of Greenland’s total runoff into the sea, although only a small area of the ice is exposed in summer. Accurately capturing the reduced albedo and the full extent of bare ice in climate models is critical to determining Greenland’s present and future runoff contribution to sea-level rise.

The mass loss of the Greenland ice sheet has recently increased due to the accelerated melting of its surface. As this melting is critically affected by surface albedo, understanding the processes and potential feedbacks regarding the albedo is required for accurately forecasting mass loss. The radiative variability of the ablation zone caused by fluctuations of the snowline affects melting about five times more strongly than hydrological and biological processes, which also darken the ice sheet. These snowline variations, due to the shallower ice sheet surface at higher altitudes, have an even greater impact on melt when the climate is warmer. As a result of these fluctuations, the mapped bare-ice surface during the summer of 2012, the record year of snowmelt, was the largest, with an area of 300,050 km²; that is, bare ice accounted for 16% of the ice surface. The smallest extent of bare ice, 184,660 km², was observed in 2006. This corresponded to 10% of the ice surface, i.e. almost 40% less area than in 2012. However, the observed variation was high and the observation period too short for a robust trend assessment.

Current climate models are too inaccurate in predicting sea-level rise during years of heavy melt, leading to uncertainty in the estimation of Greenland’s contribution to global sea-level rise. To understand the factors that influence melting, Jonathan Ryan of Brown University, Providence, Rhode Island, and his colleagues have investigated Greenland’s snow line, below which the darker ice is not covered by snow. This snow line moves up or down with Greenland’s seasons. The researchers mapped these movements between 2001 and 2017 using satellite images. The average elevation of the snow line at the end of summer varied between 1,330 m in 2009 and 1,650 m in 2012. These fluctuations of the snow line are the most important factor in how much solar energy the ice sheet absorbs. Modelers must consider this effect to improve their predictions. Knowing how much and how fast the Greenland ice melts will help us take better protective measures. At Frontis Energy, we think that the best protection against sea-level rise is the prevention and recycling of CO2.

(Photo: Wikipedia)