Posted on

Cheap, high-octane biofuel discovered

Researchers from the National Renewable Energy Laboratory (NREL) have developed a cheap method for producing high-octane gasoline from methanol. They recently published their method in the journal Nature Catalysis. Methanol can be synthesized from CO2 via various routes, as we reported last year. Biomass, such as wood, is one possibility.

The production of biofuels from wood, however, is too expensive to compete with fossil fuels. To address this problem, the researchers combined their basic research with an economic analysis. They first identified the most expensive steps of the process and then found ways to reduce these costs, using methanol as an intermediate.

Until now, converting methanol to gasoline or diesel cost about $1 per gallon. The researchers have now reduced this to about $0.70 per gallon.

The catalytic conversion of methanol into gasoline is an important research area in the field of CO2 recovery. The traditional method is based on multi-stage processes and high temperatures. It is expensive, producing low-quality fuel in small quantities, and is thus not competitive with petroleum-based fuels.

Hydrogen deficiency was the first problem the researchers had to overcome. Hydrogen is the key energy-carrying element in hydrocarbons. The researchers hypothesized that using the transition metal copper would solve this problem, and it did. They estimated that the copper-infused catalyst increased the yield by 38% at lower cost.

By facilitating the reintegration of C4 byproducts during the homologation of dimethyl ether, the copper zeolite catalyst enabled this 38% increase in product yield and a 35% reduction in conversion cost compared with conventional zeolite catalysts. Alternatively, the C4 byproducts can be routed into a synthetic kerosene that meets five specifications of a typical jet fuel. In that case, fuel synthesis costs increase slightly; although the cost savings are smaller, the resulting product has a higher value.
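
For context, the dimethyl ether intermediate mentioned above is obtained by acid-catalyzed dehydration of methanol (textbook chemistry, not specific to the NREL work):

2 CH3OH ⇌ CH3OCH3 + H2O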

Apart from the costs, the new process offers users further competitive advantages. For example, companies can compete with ethanol producers for credits for renewable fuels (if the carbon used comes from biogas or household waste). The process is also compatible with existing methanol plants that use natural gas or solid waste to produce syngas.

Posted on

Electrical energy storage

Electrical Energy Storage (EES) is the process of converting electrical energy from a power network into a form that can be stored and converted back to electricity when needed. EES enables electricity produced during times of low demand, low generation cost, or peak renewable energy generation to be stored for later use. This allows producers and transmission system operators (TSOs) to balance variations in supply, demand, and generation cost by dispatching stored electricity at times of high demand, high generation cost, or low generation capacity.
EES has many applications, including renewables integration, ancillary services, and electrical grid support. This blog series covers four aspects of EES:

  1. An overview of the function and applications of EES technologies,
  2. State-of-the-art breakdown of key EES markets in the European Union,
  3. A discussion on the future of these EES markets, and
  4. Applications (Service Uses) of EES.

Table: Common EES technologies by storage category

| Storage Category | Storage Technology |
| --- | --- |
| Pumped Hydro | Open Loop; Closed Loop |
| Electro-chemical | Batteries; Flow Batteries; Capacitors |
| Thermal Storage | Molten Salts; Heat; Ice; Chilled Water |
| Electro-mechanical | Compressed Air Energy Storage (CAES); Flywheel; Gravitational Storage |
| Hydrogen Storage | Fuel Cells; H2 Storage; Power-to-Gas |

Unlike other commodity markets, the electricity industry typically has little or no storage capability. Electricity must be used precisely when it is produced, with grid operators constantly balancing electrical supply and demand. With an ever-increasing market share of intermittent renewable energy sources, this balancing act is becoming increasingly complex.

While EES is most often touted for its ability to help minimize supply fluctuations by storing electricity produced during periods of peak renewable energy generation, there are many other applications. EES is vital to the safe, reliable operation of the electricity grid, supporting key ancillary services and grid reliability functions. These functions are often overshadowed by its ability to facilitate renewable energy integration. EES is applicable in all of the major areas of the electricity grid (generation, transmission & distribution, and end-user services). A few of the most prevalent service uses are outlined in the table below. Further explanation of service uses/cases will be provided later in this series, including a comprehensive list of EES applications.

| Area | Service Use / Case | Discharge Duration (h) | Capacity (MW) | Examples |
| --- | --- | --- | --- | --- |
| Generation | Bulk Storage | 4 – 6 | 1 – 500 | Pumped hydro, CAES, Batteries |
| Generation | Contingency | 1 – 2 | 1 – 500 | Pumped hydro, CAES, Batteries |
| Generation | Black Start | NA | NA | Batteries |
| Generation | Renewables Firming | 2 – 4 | 1 – 500 | Pumped hydro, CAES, Batteries |
| Transmission & Distribution | Frequency & Voltage Support | 0.25 – 1 | 1 – 10 | Flywheels, Capacitors |
| Transmission & Distribution | Transmission Support | 2 – 5 sec | 10 – 100 | Flywheels, Capacitors |
| Transmission & Distribution | On-site Power | 8 – 16 | 1.5 – 5 kW | Batteries |
| Transmission & Distribution | Asset Deferral | 3 – 6 | 0.25 – 5 | Batteries |
| End User Services | Energy Management | 4 – 6 | 1 kW – 1 MW | Residential storage |

Learn more about EES in the EU in the next post.

(Jon Martin, 2019)

Posted on

Semiconductive nanotubes with photovoltaic effect

Cost-effective and efficient methods for converting sunlight into electricity are the focus of green energy research. Solar cells developed for this purpose are currently made of semiconductors such as silicon. Electrical energy is generated at the junction between two different semiconductors. However, the efficiency of these solar cells has almost reached its theoretical limit. New methods of converting sunlight into electricity must be found if solar energy is to be used as a major source of electricity. An international research team from Germany, Japan, and Israel has now made important progress in this direction. Zhang and colleagues recently published their findings in the prestigious journal Nature. They demonstrate a junction-free solar cell made by rolling an atomically thin semiconductor layer into a nanotube.

In a conventional solar cell, two regions of a semiconductor are doped with different chemical elements. The electrical current is carried by the negatively charged electrons of one region and by the positively charged electron vacancies (holes) of the other. At the junction between these two regions, an electric field forms. When sunlight is absorbed at this junction, electron-hole pairs are created. The electrons and holes are then separated by the electric field, generating an electric current. This conversion of solar energy into electricity is called the photovoltaic effect. The photovoltaic effect is particularly important for green energy production, but as mentioned above, its efficiency in conventional cells has almost reached the theoretical limit.

In technical terms, the photovoltaic effect occurs at traditional pn junctions, where a p-type material (with an excess of holes) adjoins an n-type material (with an excess of electrons). Current is generated by the photo-induced creation of electron-hole pairs and their subsequent separation. Further advances are expected from other photovoltaic effects that do not require a junction and occur only in crystals with broken inversion symmetry. However, the practical implementation of these effects has been impeded by the low efficiency of the materials. Semiconductors with reduced dimensionality or smaller band gaps have been shown to be more efficient. Transition metal dichalcogenides (TMDs) are, for example, two-dimensional small-bandgap semiconductors in which such effects have been observed when the inversion symmetry of their bulk crystals is broken.

The reported bulk photovoltaic effect (BPVE) is based on tungsten disulfide, a member of the TMD family. Crystals of this material have a layered structure and can be exfoliated into atomically thin sheets, similar to graphite. The resulting sheets can then be rolled into tubes of about 100 nanometers diameter by chemical methods. The authors produced photovoltaic devices from three types of tungsten disulfide: a monolayer, a bilayer, and a nanotube.

The authors thus achieved a systematic reduction in crystal symmetry that goes beyond merely breaking inversion symmetry: the transition from a two-dimensional monolayer to a nanotube yielded a polar structure. The photovoltaic current density produced is orders of magnitude greater than that of other comparable materials. The results confirm not only the potential of TMD-based nanomaterials but also the importance of reduced crystal symmetry for improving the BPVE.

While the nanotube devices had a large BPVE, the monolayer and bilayer devices produced only a negligible electric current under illumination. The researchers attribute the different performance of the devices to their different crystal symmetries. This way, one can spontaneously generate a current in uniform semiconductors, without a junction.

The BPVE was first observed in 1956 at Bell Labs, New Jersey, just two years after the invention of the modern silicon solar cell. The effect is limited to non-centrosymmetric materials, which lack spatial inversion symmetry, that is, symmetry under the combination of a 180° rotation and a reflection. The effect has two attractive properties: the current generated by light depends on the polarization of the incident light, and the associated voltage can be greater than the band gap of the material, i.e. the energy required to excite free conduction electrons. However, the effect typically has a low conversion efficiency and was therefore of academic rather than industrial interest.
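
In the standard phenomenological description (textbook physics, not specific to this paper), the BPVE photocurrent is a second-order response to the light field:

$$J_i = \sum_{j,k} \beta_{ijk} \, E_j E_k^*$$

Here $E$ is the electric field of the incident light and $\beta_{ijk}$ is a third-rank tensor. Under spatial inversion, $\beta_{ijk}$ changes sign and must therefore vanish in centrosymmetric crystals, which is why broken inversion symmetry is a prerequisite; the tensor form also explains the polarization dependence of the current mentioned above.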

To achieve high efficiency, a photovoltaic material must combine strong light absorption with low internal symmetry. However, these two properties usually do not coexist in a given material. Semiconductors that absorb most of the incident sunlight generally have high symmetry, which reduces or even prevents the effect. Low-symmetry materials, such as perovskite oxides, absorb little sunlight due to their large band gaps. To circumvent this problem, efforts have been made to improve light absorption in low-symmetry materials, for example by doping. It has also been shown that the effect can occur in semiconductors when mechanical fields are used to adjust the crystal symmetry of the material.

The newly discovered solution is encouraging with regard to producing semiconducting nanotubes with high light absorption. In the case of tungsten disulfide, the crystal symmetry of the nanotubes is reduced compared with the mono- and bilayers due to the curved walls of the tube. The combination of excellent light absorption and low crystal symmetry means that the nanotubes exhibit a significant photovoltaic effect. Their current density exceeds that of materials which are inherently low in symmetry. Nevertheless, the conversion efficiency achieved is still much lower than that of the photovoltaic effect in conventional junction-based solar cells.

The authors’ findings demonstrate the great potential of nanotubes in solar energy production and raise various technological and scientific challenges. From an application perspective, it would be useful to produce a solar cell consisting of a large array of semiconductor nanotubes to check whether the approach is scalable. The direction of the generated current is largely determined by the internal symmetry of the material. Uniform symmetry across the nanotube array would therefore be required to create a collective current; otherwise, the currents of individual tubes could cancel each other out.

At Frontis Energy, we wonder if the method described could work with the classic photovoltaic effect in the same solar cell. That would possibly increase overall efficiency. The two effects could use the solar energy consecutively. Despite the remaining challenges, the presented work offers a possibility for the development of highly efficient solar cells.

(Photo: Wikipedia)

Posted on

EU market summary for energy storage

Electrical energy storage (EES) is not only a vital component in the reliable operation of modern electrical grids, but also a focal point of the global renewable energy transition. It has often been suggested that EES technologies could be the missing piece to eliminating the technical hurdles facing the implementation of intermittent renewable energy sources. In the following blog posts, selected EES markets within the European Union will be evaluated in detail.

With over 80 GW of installed wind and solar capacity, Germany is by far the leading EU nation in the renewable energy transition. However, experts have argued that Germany’s need for widespread industrial-scale energy storage is unlikely to materialize in any significant quantity for up to 20 years. This is due to a number of factors. Germany’s geographic location and abundance of connections to neighbouring power grids make exporting any electricity fluctuations relatively easy. Additionally, when Germany reaches its 2020 targets for wind and solar capacity (46 GW and 52 GW, respectively), the combined supply at any given time would generally not exceed 55 GW, and nearly all of it would be consumed domestically, with little or no need for storage.

When evaluating energy storage in the UK, a different story emerges. As an isolated island nation, the UK places considerably more focus on energy independence alongside its low-carbon energy goals. However, the existing regulatory environment is cumbersome and poses barriers significant enough to substantially inhibit the transition to a low-carbon energy sector, including EES. The UK government has acknowledged these regulatory barriers and pledged to address them. As part of this effort, a restructuring of the power market into a capacity-based market is already underway. The outlook for EES in the UK is promising: there is considerable pressure not only from industry, but also from the public and the government, to continue developing EES facilities at industrial scale.

Italy, once heavily hydro-powered, has grown to rely on natural gas, coal, and oil for 50% of its electricity (gas alone representing 34%). The introduction of a solar feed-in tariff (FIT) in 2005 led to significant growth in the solar industry (Italy now ranks 2nd in per-capita solar capacity globally) before the program ended in July 2014. In recent years there has been notable growth in electro-chemical EES capacity (~84 MW installed), primarily driven by a single large-scale project by TERNA, Italy’s transmission system operator (TSO). This capacity has made Italy the leader in EES capacity in the EU; to date, however, the market is dominated by the large TSOs.

However, the combination of a reliance on imported natural gas, over 500,000 PV systems no longer collecting FIT premiums, and increasing electricity rates presents a unique market opportunity for residential power-to-gas in Italy.
Denmark is aggressively pursuing a 100-percent renewable target for all sectors by 2050. While there is still no official roadmap for how to get there, the options have essentially been narrowed down to one of two scenarios: a biomass-based scenario, or a wind-plus-hydrogen-based scenario. Under the hydrogen-based scenario, there would be widespread investment to expand wind capacity and couple it with hydrogen power-to-gas systems for bulk energy storage. Given the Danish expertise and embodied investment in wind energy, one would expect the future Danish energy system to be built around this strength, and hence to require significant power-to-gas investment.

The renewable energy industry in Spain has completely stagnated due to retroactive policy changes and taxes on the consumption of solar-generated electricity introduced in 2015. The implementation of Royal Decree 900/2015 on self-consumption has rendered PV systems unprofitable and added fees and taxes for the use of EES devices. No evidence was found to suggest that a market for energy storage will materialize in Spain in the near future.

The final country investigated was the Netherlands, which has been criticized by the EU for its lack of progress on renewable energy targets. With only 10% of Dutch electricity coming from renewable sources, there is currently little demand for large-scale EES. While the Netherlands may be lagging behind on renewable electricity targets, it has been a leader in EV penetration, a trend that is expected to continue and to put one million EVs on Dutch roads by 2025. In parallel with the EV growth, there has been a large surge in sub-100 kW Li-ion installations for storing energy at electric vehicle (EV) charging stations. It is expected that these applications will remain the primary focus of EES in the Netherlands.

Similar to Italy, the Dutch rely heavily on natural gas for energy within their homes. This fact, coupled with an ever-increasing focus on energy-independent and efficient houses, could make the Netherlands a prime market for residential power-to-gas technologies.

Read more about electrical energy storage here.

Jon Martin, 2019

(Photo: NASA)

Posted on

Hydropower

Hydropower is electricity generated by the movement of water.

In the late 19th century, hydropower became an industrially efficient method of generating electricity. Water falling from great heights, e.g. in mountain streams or rivers, as well as strong currents, is best suited for generating electricity from hydropower. This electricity is a considerable global energy source. It is generated by water entering and rotating a turbine. When the turbine is connected to an electric generator, the mechanical energy is converted into electrical energy. Niagara Falls and the Hoover Dam are two examples of electricity produced in this way.
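
The available power follows from the flow rate and the height of the fall, or hydraulic head (a textbook relation, not specific to this post):

$$P = \eta \, \rho \, g \, Q \, h$$

where $Q$ is the volumetric flow rate in m³/s, $h$ the head in m, $\rho \approx 1{,}000$ kg/m³ the density of water, $g \approx 9.81$ m/s² the gravitational acceleration, and $\eta$ the combined turbine-generator efficiency, often 0.8 to 0.9. For example, 100 m³/s falling through 50 m at $\eta = 0.9$ yields roughly 44 MW.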

Hydropower provides about 20% of the world’s electricity.

Hydropower has recently gained popularity. The World Bank called it a workable solution to keep up with growing energy needs while avoiding CO2 emissions.

(Photo: Wikipedia)

Posted on

Machine learning makes smarter batteries

Renewable energies, such as wind and solar, are naturally intermittent. To balance their demand and supply, batteries, for example those of electric vehicles, can be charged and act as an energy buffer for the power grid. Cars spend most of their time idle and could, at the same time, feed their electricity back into the grid. While this is still a dream of the future, the commercialization of electric and hybrid vehicles is already creating a growing demand for long-lasting batteries, both for driving and for grid buffering. Consequently, methods for evaluating the state of a battery will become increasingly important.

The long duration of battery health tests is a problem that hinders the rapid development of new batteries. Better battery-life forecasting methods are therefore urgently needed but extremely difficult to develop. Now, Severson and her colleagues report in the journal Nature Energy that machine learning models can help predict battery life. The published algorithms use data from early-stage charge and discharge cycles.

Normally, a figure of merit describes the health of a battery. It quantifies the ability of the battery to store energy relative to its original state. The state of health is 100% when the battery is new and decreases with time. This is similar to the state of charge of a battery. Estimating the state of charge of a battery is, in turn, important to ensure safe and correct use. However, there is no consensus in industry or science as to what exactly a battery’s state of health is or how it should be determined.

The state of health of a battery reflects two signs of aging: progressive capacity decline and impedance increase (another measure of electrical resistance). Estimates of the state of health must therefore take into account both the drop in capacity and the increase in impedance.
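
One common capacity-based convention, one of several given the lack of consensus noted above, defines the state of health as the ratio of the currently deliverable capacity to the nominal capacity of the new battery:

$$\mathrm{SOH}(t) = \frac{Q_{\max}(t)}{Q_{\mathrm{nominal}}} \times 100\,\%$$

Impedance-based definitions track the resistance increase instead, which is why practical estimators often combine both quantities.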

Lithium-ion batteries, however, are complex systems in which both capacity fade and impedance increase are caused by multiple interacting processes. Most of these processes cannot be studied independently since they often occur simultaneously. The state of health can therefore not be determined from a single direct measurement. Conventional health assessment methods include examining the interactions between the electrodes of a battery. Since such methods often intervene directly in the battery system, they render the battery unusable, which is hardly desirable.

A battery’s health status can also be determined in less invasive ways, for example using adaptive models and experimental techniques. Adaptive models learn from recorded battery performance data and adjust themselves. They are useful if no system-specific battery information is available, and they are suitable for diagnosing aging processes. The main problem, however, is that they must be trained with experimental data before they can be used to determine the current capacity of a battery.

Experimental techniques are used to evaluate specific physical processes and failure mechanisms, allowing the rate of future capacity loss to be estimated. Unfortunately, these methods cannot detect intermittent faults. Alternative techniques use the rate of voltage or capacity change (rather than raw voltage and current data). To accelerate the development of battery technology, further methods are needed that can accurately predict battery life.

Severson and her colleagues have created a comprehensive data set that includes the performance data of 124 commercial lithium-ion batteries during their charge and discharge cycles. The authors used a variety of fast-charging conditions with identical discharge conditions. This approach produced a wide spread of battery lifetimes, ranging from 150 to 2,300 cycles.

The researchers then used machine learning algorithms to analyze the data, creating models that can reliably predict battery life. Using only the first 100 cycles of each experimentally characterized battery, their models already detected early signs of capacity fade. The best model predicted the lifetime correctly for about 91% of the data sets in the study. Using just the first five cycles, batteries could be classified into categories with short (<550 cycles) or long lifetimes.
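
Conceptually, such a data-driven pipeline can be reproduced with off-the-shelf tools. The following minimal sketch (our illustration, not the authors' code) fits a regularized linear model to early-cycle features; the data, feature count, and coefficients are synthetic stand-ins:

```python
# Minimal sketch: predicting cycle life from early-cycle features with a
# regularized linear model. All values are synthetic stand-ins; real
# features could include, e.g., the log variance of the discharge-capacity
# difference between cycles 100 and 10, ΔQ_{100-10}(V).
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

n_cells = 124                         # as many cells as in the study
X = rng.normal(size=(n_cells, 3))     # 3 hypothetical early-cycle features
true_coef = np.array([-400.0, 60.0, 25.0])  # hypothetical weights
cycle_life = 800.0 + X @ true_coef + rng.normal(scale=50.0, size=n_cells)

X_train, X_test, y_train, y_test = train_test_split(
    X, cycle_life, test_size=0.25, random_state=0)

# Elastic-net regression keeps the model sparse and interpretable.
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X_train, y_train)
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
```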

The researchers’ work shows that data-driven modeling using machine learning allows forecasting the state of health of lithium-ion batteries. The models can identify aging processes that are not otherwise apparent in capacity data during early cycles. Accordingly, the new approach complements previous predictive models. At Frontis Energy, we also see the opportunity to combine such data with models that predict the behavior of other complex dynamic systems.

(Photo: Wikipedia)

Posted on

Faster photoelectrical hydrogen

Achieving high current densities while maintaining high energy efficiency is one of the biggest challenges in improving photoelectrochemical devices. Higher current densities accelerate the production of hydrogen and other electrochemical fuels.

Now a compact, solar-powered, hydrogen-producing device has been developed that provides the fuel at record speed. In the journal Nature Energy, the researchers around Saurabh Tembhurne describe a concept that allows capturing concentrated solar radiation (up to 474 kW/m²) by thermal integration, mass transport optimization and better electronics between the photoabsorber and the electrocatalyst.

The research group of the Swiss Federal Institute of Technology in Lausanne (EPFL) calculated the maximum increase in theoretical efficiency. Then, they experimentally verified the calculated values using a photoabsorber and an iridium-ruthenium oxide-platinum based electrocatalyst. The electrocatalyst reached a current density greater than 0.88 A/cm². The calculated conversion efficiency of solar energy into hydrogen was more than 15%. The system was stable under various conditions for more than two hours. Next, the researchers want to scale up their system.
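
For context, solar-to-hydrogen efficiencies such as the 15% quoted here are conventionally defined relative to the thermodynamic water-splitting potential of 1.23 V (a standard definition, not a formula taken from the paper):

$$\eta_{\mathrm{STH}} = \frac{j_{\mathrm{op}} \times 1.23\ \mathrm{V}}{P_{\mathrm{in}}}$$

where $j_{\mathrm{op}}$ is the operating current density and $P_{\mathrm{in}}$ the incident solar power density referenced to the same area.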

The produced hydrogen can be used in fuel cells for power generation, which makes the developed system suitable for energy storage. Hydrogen-powered electricity generation emits only pure water. However, the clean and fast production of hydrogen remains a challenge. The photoelectrochemical method uses materials similar to those in solar modules. In the new system, the electrolytes were water-based, although ammonia would also be conceivable. Sunlight reaching these materials triggers a reaction in which water is split into oxygen and hydrogen. So far, however, no photoelectrochemical method has been usable on an industrial scale.

2 H2O → 2 H2 + O2; ∆G°’ = +237 kJ/mol (H2)

The newly developed system absorbed more than 400 times the amount of solar energy that normally shines on a given area. The researchers used high-power lamps to provide the necessary “solar energy”. Existing solar systems concentrate solar energy to a similar degree with the help of mirrors or lenses. The waste heat is used to accelerate the reaction.

The team predicts that the test equipment, with a footprint of approximately 5 cm², can produce an estimated 47 liters of hydrogen gas in six hours of sunshine. This is the highest area-based production rate for such solar-powered electrochemical systems. At Frontis Energy we hope to be able to test and offer this system soon.
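
A quick back-of-the-envelope check of what that volume represents, assuming ideal-gas behavior at about 25 °C (our own estimate, not a figure from the paper):

$$n_{\mathrm{H_2}} \approx \frac{47\ \mathrm{L}}{24.5\ \mathrm{L/mol}} \approx 1.9\ \mathrm{mol}, \qquad E \approx 1.9\ \mathrm{mol} \times 237\ \mathrm{kJ/mol} \approx 455\ \mathrm{kJ} \approx 0.13\ \mathrm{kWh}$$

using the ∆G°’ of the water-splitting reaction given above.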

(Photo: Wikipedia)

Posted on

Ammonia energy storage #3

As a loyal reader of our blog, you will certainly remember our previous publications on ammonia energy storage. There, we described possible ways to extract ammonia from air, as well as the recovery of its energy in the form of methane (patent pending WO2019/079908A1). Since global food production requires large amounts of ammonia fertilizers, technologies for its extraction from air are already very mature. These technologies are essentially all based on the Haber-Bosch process, which was industrialized at the beginning of the last century. In this process, atmospheric nitrogen (N2) is reduced to ammonia (NH3). Despite the simplicity of the molecules involved, the cleavage of the strong nitrogen−nitrogen bond in N2 and the formation of nitrogen−hydrogen bonds pose a major challenge for catalytic chemists. The reaction usually takes place under harsh conditions and requires a lot of energy, i.e. high reaction temperatures, high pressures, and complicated combinations of reagents, which are also often expensive and energy-intensive to manufacture.
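
For reference, the overall Haber-Bosch reaction is:

N2 + 3 H2 ⇌ 2 NH3; ∆H° = −92 kJ/mol (N2)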

Now, a research group led by Yuya Ashida has published an article in the renowned journal Nature, showing that a samarium compound in aqueous solution, combined with a molybdenum catalyst, can form ammonia from atmospheric nitrogen. The work opens up new possibilities in the search for ammonia synthesis under ambient conditions. Under such conditions, less energy is required to produce ammonia, resulting in higher energy efficiency for energy storage. In today’s Haber-Bosch process, air and hydrogen gas are combined over an iron catalyst. The resulting global ammonia production ranges from 250 to 300 tonnes per minute, delivering fertilizers that feed nearly 60% of the world’s population (The Alchemy of Air, available at Amazon).

Comparison of different approaches to produce ammonia. Top: In the industrial Haber-Bosch synthesis of ammonia (NH3), nitrogen gas (N2) reacts with hydrogen molecules (H2), typically in the presence of an iron catalyst. The process requires high temperatures and pressures but is thermodynamically ideal, because only little energy is wasted on side reactions. Center: Nitrogenase enzymes catalyze the reaction of nitrogen with six electrons (e−) and six protons (H+) under ambient conditions to form ammonia. However, two additional electrons and protons form one molecule of H2. The conversion of ATP (the biological energy “currency”) into ADP drives the reaction. This reaction has a high chemical overpotential: it consumes much more energy than is needed for the actual ammonia-forming reaction. Bottom: In the new reaction proposed by Ashida and colleagues, a mixture of water and samarium diiodide (SmI2) is converted to ammonia using nitrogen under ambient conditions and in the presence of a molybdenum catalyst. SmI2 weakens the O−H bonds of the water and generates hydrogen atoms, which then react with atmospheric nitrogen.

On an industrial scale, ammonia is synthesized at temperatures exceeding 400°C and pressures of approximately 400 atmospheres. These conditions are often referred to as “harsh”, and in the early years of Haber-Bosch development they were difficult to control; fatal accidents were not uncommon. This has motivated many chemists to find “milder” alternatives, which has always meant searching for new catalysts to lower operating temperatures and pressures. New catalysts would ultimately reduce the capital investment needed to build fertilizer plants. Since ammonia synthesis is one of the largest industrial producers of carbon dioxide, this would also reduce the associated emissions.

Like many other chemists before them, the authors were inspired by nature. Nitrogenase enzymes carry out the biological conversion of atmospheric nitrogen into ammonia, a process called nitrogen fixation. On today’s Earth, this process is the source of the nitrogen atoms in amino acids and nucleotides, the elemental building blocks of life. In contrast to the Haber-Bosch process, nitrogenases do not use hydrogen gas as a source of hydrogen atoms. Instead, they transfer protons (hydrogen ions, H+) and electrons (e−) to each nitrogen atom to form N−H bonds. Although nitrogenases fix nitrogen at ambient temperature, they use eight protons and eight electrons per molecule of N2. This is remarkable because the stoichiometry of the reaction requires only six of each. In this way, nitrogenases provide the necessary thermodynamic drive for nitrogen fixation. The excess of hydrogen equivalents means that nitrogenases have a high chemical overpotential; that is, they consume much more energy than would actually be needed for nitrogen fixation.
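
The commonly cited overall stoichiometry of nitrogenase catalysis illustrates this excess:

N2 + 8 H+ + 8 e− + 16 ATP → 2 NH3 + H2 + 16 ADP + 16 Pi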

The newly published reaction is not the first attempt to mimic the nitrogenase reaction. In the past, metal complexes combined with proton and electron sources were used to convert atmospheric nitrogen into ammonia. The same researchers previously developed molybdenum complexes that catalyze nitrogen fixation in this way, producing 230 ammonia molecules per molybdenum complex. The associated overpotentials were significant, at almost 1,300 kJ per mole of nitrogen. By comparison, the Haber-Bosch process is far less energy-intensive when a suitable catalyst is used.

The challenge for catalysis researchers is to combine the best of the biological and industrial approaches to nitrogen fixation, so that the process proceeds at ambient temperatures and pressures while the catalyst reduces the chemical overpotential far enough that building new fertilizer plants no longer requires such high capital investment. This is a major challenge, as no known combination of acids (which serve as proton sources) and reducing agents (the electron sources) is reactive enough to form N−H bonds at room temperature while remaining at the thermodynamic level of hydrogen gas. In the now-described pathway with molybdenum and samarium, the researchers adopted a strategy in which the proton and electron sources are no longer used separately. This is a fundamentally new approach to catalytic ammonia synthesis. It makes use of a phenomenon known as coordination-induced bond weakening, which here is based on the interaction of samarium diiodide (SmI2) and water.

Water is stable because of its strong oxygen−hydrogen (O−H) bonds. However, when the oxygen atom of water coordinates to SmI2, it exposes its lone electron pair and its O−H bonds are weakened. As a result, the mixture becomes a readily available source of hydrogen atoms, that is, of protons and electrons. The researchers around Yuya Ashida use this mixture with a molybdenum catalyst to fix nitrogen. SmI2-water mixtures are particularly suitable for this type of catalysis: considerable coordination-induced bond weakening had previously been measured in them and exploited, among other things, to form carbon−hydrogen bonds.

The extension of this idea to catalytic ammonia synthesis is remarkable for two reasons. First, the molybdenum catalyst facilitates ammonia synthesis in aqueous solution. This is astonishing because molybdenum complexes in water are usually degraded. Second, the use of coordination-induced bond weakening provides a new method for nitrogen fixation at ambient conditions. It also avoids potentially hazardous combinations of proton and electron sources, which pose a fire hazard. The authors’ approach also works when ethylene glycol (HOCH2CH2OH) is used instead of water, extending the range of candidate proton and electron sources by a further precursor.

Ashida and colleagues propose a catalytic cycle for their process in which the molybdenum catalyst initially coordinates to nitrogen and cleaves the N−N bond to form a molybdenum nitrido complex, which contains a molybdenum−nitrogen triple bond. The SmI2-water mixture then delivers hydrogen atoms to this complex, eventually producing ammonia. The formation of N−H bonds at molybdenum nitrido complexes represents a significant thermodynamic challenge, since the N−H bonds are also weakened by the molybdenum. Nevertheless, this disadvantage is offset by the reduction of the chemical overpotential. The SmI2 not only facilitates the transfer of hydrogen atoms but also keeps the metal in a reduced form, preventing undesired molybdenum oxide formation in aqueous solution.

The new process still has significant operational hurdles to overcome before it can be used on an industrial scale. For example, SmI2 is used in large quantities, which generates a lot of waste, and separating ammonia from aqueous solution is energy-intensive. However, if the process were used for energy storage in combination with our recovery method, the separation from the aqueous solution would be unnecessary. Finally, there is still a chemical overpotential of about 600 kJ/mol. Future research should therefore focus on finding alternatives to SmI2, for example based on metals that are more abundant than samarium and that also promote coordination-induced bond weakening. As Fritz Haber and Carl Bosch experienced themselves, the newly developed method will probably take some time to develop before it becomes available on an industrial scale.

(Photo: Wikipedia)

Posted on

Melting ice sheets in Greenland contribute 25% to sea level rise

Recently we reported the loss of snow cover in Europe. The snow is not only gone in many parts of Europe; Greenland’s ice cover is melting, too. The Greenland ice sheet contributes 25% to global sea-level rise, making it the largest contribution of the cryosphere. The increased mass loss of Greenland ice during the 21st century is mainly due to increased surface water runoff, of which ~93% comes directly from the small ablation zone of the ice sheet (~22% of the ice surface). As the snow melts in summer, bare glacier ice becomes exposed in this ablation zone. Bare ice is darker and less porous than snow: it absorbs more than twice as much solar radiation while retaining less meltwater. Bare ice produces a large proportion (~78%) of Greenland’s total runoff into the sea, although only a small area of the ice is exposed in summer. Accurately capturing the reduced albedo and the full extent of bare ice in climate models is critical for determining Greenland’s present and future runoff contribution to sea-level rise.

The mass loss of the Greenland ice sheet has recently increased due to accelerated melting of its surface. As this melting is critically affected by surface albedo, understanding the processes and potential feedbacks regarding the albedo is required for accurately forecasting mass loss. The radiative variability caused by the fluctuating snow line makes the ice sheet melt five times faster than hydrological and biological processes, which also darken the ice sheet. Because the ice sheet is flatter at higher elevations, snow-line fluctuations have an even greater impact on melt rates when the climate warms. As a result of these fluctuations, the mapped bare-ice area during the summer of 2012, the record melt year, was the largest at 300,050 km²; that is, bare ice accounted for 16% of the ice surface. The smallest extent of bare ice, 184,660 km², was observed in 2006 and corresponded to 10% of the ice surface, i.e. almost 40% less area than in 2012. However, the observed variation was high and the observation period too short for a solid trend assessment.

Current climate models predict sea-level rise during flood years too inaccurately, leading to uncertainty in estimates of Greenland’s contribution to global sea-level rise. To understand the factors that influence melting, Jonathan Ryan of Brown University, Providence, Rhode Island, and his colleagues investigated Greenland’s snow line. At elevations below the snow line, the darker ice is not covered by snow. This snow line moves up and down with Greenland’s seasons. The researchers mapped these movements between 2001 and 2017 using satellite images. The average elevation of the snow line at the end of summer ranged from 1,330 m in 2009 to 1,650 m in 2012. These snow-line fluctuations are the most important factor in how much solar energy the ice sheet absorbs. Modelers must consider this effect to improve their predictions. Knowing how much and how fast the Greenland ice melts will help us take better protective measures. At Frontis Energy, we think that the best protection against sea-level rise is the prevention and recycling of CO2.

(Photo: Wikipedia)

Posted on

Economic losses caused by flooding due to global warming

In Europe, floods are linked to strong fluctuations of atmospheric pressure, known as the North Atlantic Oscillation. Stefan Zanardo and his colleagues at Risk Management Solutions, London, UK, analyzed historical records of severe floods in Europe since 1870 and compared them with the patterns of atmospheric pressure at the time of the floods. When the North Atlantic Oscillation is in a positive state, a depression over Iceland drives wind and storms across northern Europe. In a negative state, by contrast, it makes southern Europe wetter than usual. Floods normally occur in northern Europe and cause the most damage when the North Atlantic Oscillation is positive in winter. If enough rain has already fallen to saturate the soil, conditions of high flood risk are met. Air pressure patterns over Europe may change with global warming, and public administrations should take this into account when assessing the flood risk of a region, the researchers say.
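
For orientation, a common station-based convention (an illustration, not necessarily the index definition used in the study) expresses the NAO index as the difference of standardized sea-level pressure anomalies between a southern station, e.g. the Azores or Lisbon, and Iceland:

$$\mathrm{NAO} = \tilde{P}_{\mathrm{Azores}} - \tilde{P}_{\mathrm{Iceland}}$$

A positive index corresponds to a deeper Icelandic low, steering storms toward northern Europe, as described above.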

This is important because flooding in Europe often causes loss of life, significant property damage, and business interruptions. Global warming will further worsen this situation, and the risk distribution will change as well. The frequent occurrence of catastrophic floods in recent years has sparked strong interest in this problem in both the public and private sectors. The public sector has been working to improve early warning systems, and these systems indeed have economic benefits. In addition, various risk-mitigation strategies have been implemented in European countries, including flood protection, measures to raise risk awareness, and risk transfer through better dissemination of flood insurance. The fight against the root cause, namely global warming, however, still falls far short of what is needed.

Correlations between large-scale climate patterns, in particular the North Atlantic Oscillation, and extreme events in the water cycle on the European continent have long been described in the literature. With more severe and more frequent flooding and alarming global-warming scenarios, concerns over future flood-related economic losses have moved into the focus of public attention. Although it is known that climatic patterns control meteorological events, it is not always clear whether this link affects the frequency and severity of flooding and the associated economic losses. In their study, the researchers relate the North Atlantic Oscillation to economic flood losses.

The researchers used recent data from flood databases as well as catastrophe models to establish this relation. The models allowed them to quantify the economic losses ultimately caused by the North Atlantic Oscillation. These losses vary widely between the countries within the influence of the North Atlantic Oscillation. The study shows that the North Atlantic Oscillation is a good predictor of average losses in the long term. Based on this predictability, the researchers argue that the temporal variations in flood risk caused by climate oscillations can be forecast. This can help counter catastrophic flood events early on, so that flood damage can be minimized or even avoided. As scientists improve their predictions of the North Atlantic Oscillation, society will be better prepared for future flooding.

(Photo: Wikipedia, Stefan Penninger, Sweden)