
High-performance biomass molecule for better diesel fuel

In our previous blog posts, we discussed resource recovery from wastewater treatment and showed that improved and enforced regulations have a positive impact on water quality and public health. Now we show that clever catalytic processes can extract valuable commodities from agricultural waste products.

Low-cost waste biomass can serve as a renewable source for sustainable alternatives to fossil carbon resources and so meet the need for environmentally friendly energy. For example, C2 and C4 ethers derived from biomass-based carboxylic acids are promising fuel candidates. Such ethers have been reported to significantly improve biofuel parameters such as ignition quality and sooting compared with commercial petrodiesel: the yield sooting index is reduced by more than 86%, and the ignition quality (cetane number) is improved by more than 56%.

Scientists from the National Renewable Energy Laboratory, together with colleagues from Yale University, Argonne National Laboratory, and Oak Ridge National Laboratory, are working on a joint project to co-optimize fuels and engines. The research focuses on improving fuel economy and vehicle performance while reducing emissions by identifying blendstocks derived from biomass.

In their recent article, published in the renowned journal PNAS, they report a novel molecule, 4-butoxyheptane, obtained in a high-yielding catalytic process from lignocellulosic biomass. Thanks to its high oxygen content, this blendstock can improve the performance of diesel fuel by reducing its intrinsic sooting tendency during combustion.

The research team reports a “fuel-property-first” approach to accelerate the development of suitable oxygenate diesel blendstocks.

This rational approach is based on the following steps:

  1. Fuel property characterization – mapping and identifying accessible oxygenate products, and predicting their fuel properties a priori by computational screening (see the sketch after this list)
  2. Production process – developing the conversion pathway from biomass, including a continuous, solvent-free synthesis based on a metal/acid catalyst for liter-scale production of the chosen compound
  3. Testing and analysis – validating the predictions by comparing them against measured fuel properties
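
To illustrate what such an a priori screen might look like in practice, here is a minimal sketch in Python. The candidate names, property values, and cutoff thresholds are purely illustrative assumptions, not data from the study.

```python
# Minimal sketch of a fuel-property-first screen (illustrative values only).
candidates = [
    # name, predicted cetane number, yield sooting index, flash point (°C)
    {"name": "4-butoxyheptane",   "cetane": 80, "ysi": 55, "flash": 65},
    {"name": "candidate ether A", "cetane": 45, "ysi": 90, "flash": 40},
    {"name": "candidate ester B", "cetane": 60, "ysi": 70, "flash": 70},
]

# Screening thresholds (assumptions): ignition quality at least that of
# petrodiesel, low sooting tendency, and a safe flash point.
MIN_CETANE, MAX_YSI, MIN_FLASH = 50, 80, 52

passing = [c for c in candidates
           if c["cetane"] >= MIN_CETANE
           and c["ysi"] <= MAX_YSI
           and c["flash"] >= MIN_FLASH]

# Rank the survivors by sooting tendency, lowest first.
for c in sorted(passing, key=lambda c: c["ysi"]):
    print(c["name"], c["cetane"], c["ysi"])
```

Only the molecules that pass every health, safety, and performance cutoff proceed to synthesis and testing, which is the point of putting fuel properties first.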

The investigated fuel properties of the target oxygenates cover health and safety aspects such as flash point, biodegradation potential, and toxicity/water solubility, as well as market and environmental aspects such as ignition quality (cetane number), viscosity, lower heating value, and sooting potential. As a result, 4-butoxyheptane emerged as the most promising molecule for blending with and improving traditional diesel. The fuel property measurements largely agreed with the predictions, validating the accuracy of the a priori approach to blendstock selection.

A blend of 20-30% 4-butoxyheptane in diesel fuel has been suggested as favorable. The improved autoignition quality and the significant reduction of the yield sooting index from 215 to 173 (a 20% reduction) demonstrate that incorporating this molecule could improve diesel emission properties without sacrificing performance. With regard to flammability, toxicity, and storage stability, the oxygenate fuel was evaluated as low-risk.

Life-cycle analysis shows that such a blend could be cost-competitive and has the potential for significant greenhouse gas reductions (by 50 to 271%) compared with petrodiesel.

As research is a perpetual process, more of it is needed; it should include testing the bioblendstock in an actual engine and producing the biofuel in an integrated process directly from biomass.

(Mima Varničić, 2020, photo: Pixabay)


Global wastewater resources estimated

In our last post, on water quality in China, we pointed out a study showing that improved wastewater treatment has a positive effect on the environment and ultimately on public health. However, wastewater treatment requires sophisticated and costly infrastructure, which is not available everywhere. Extracting resources from wastewater can offset some of the costs of plant construction and operation. The question is how much of a resource wastewater really is.

A recent study published in the journal Natural Resources Forum tries to answer that question. It is the first to estimate how much wastewater all cities on Earth produce each year. The amount is enormous, as the authors note: currently 380 billion cubic meters per year worldwide, with only 5% of urban areas (by population) omitted from the estimate.

The most important resources in wastewater are energy, nutrients such as nitrogen, potassium, and phosphorus, and the water itself. In municipal wastewater treatment plants, the nutrients come from human excretions; in industry and agriculture, they are remnants of production processes. The team calculated how much of these nutrients is likely to end up in the global wastewater stream and arrived at a total of 26 million tons per year − almost eighty times the weight of the Empire State Building in New York.

If the entire nitrogen, phosphorus, and potassium load were recovered, it could theoretically cover 13% of the global fertilizer demand. The team assumes that the wastewater volume will continue to increase as the world’s population, urbanization, and living standards grow. They estimate that by 2050 there will be almost 50% more wastewater than in 2015. It will be necessary to treat as much of it as possible and to make greater use of the nutrients it contains. As we pointed out in our previous post, wastewater is causing more and more environmental and public health problems.
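
As a quick plausibility check, the two figures above imply a global fertilizer demand of roughly 200 million tons per year, which is in line with published estimates. A minimal sketch:

```python
# Consistency check of the nutrient figures above (restates the study's numbers).
nutrient_load_mt = 26.0    # Mt/yr of nutrients in global wastewater (study estimate)
covered_share = 0.13       # share of global fertilizer demand this could cover

implied_fertilizer_demand_mt = nutrient_load_mt / covered_share
print(f"Implied global fertilizer demand: {implied_fertilizer_demand_mt:.0f} Mt/yr")  # ≈ 200
```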

There is also energy in wastewater. Wastewater treatment plants in industrialized countries have been using it in the form of biogas for a long time: most plants ferment sewage sludge in large anaerobic digesters to produce methane. As a result, some plants are now energy self-sufficient.

The authors also calculated the energy potential hidden in the wastewater of all cities worldwide. In principle, this energy would be sufficient to supply 500 to 600 million average consumers with electricity. The problem is that wastewater treatment and energy recovery technologies are expensive and therefore hardly used in non-industrialized countries. According to the scientists, this will change − occasionally, it is already happening.
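
A back-of-envelope estimate with typical rule-of-thumb values lands in the same range. Except for the wastewater volume, every number below is our own assumption, not a figure from the study:

```python
# Rough consistency check of the electricity claim above.
wastewater_m3 = 380e9          # global urban wastewater per year (study estimate)
energy_kwh_per_m3 = 2.0        # assumed chemical energy content of domestic wastewater
conversion_efficiency = 0.30   # assumed biogas-to-electricity conversion
kwh_per_consumer = 400         # assumed modest annual consumption per person

electricity_kwh = wastewater_m3 * energy_kwh_per_m3 * conversion_efficiency
consumers = electricity_kwh / kwh_per_consumer
print(f"{consumers / 1e6:.0f} million consumers")  # ≈ 570 million
```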

Singapore is a prominent example: wastewater there is treated so thoroughly that it is fed back into the normal water network. In Jordan, the wastewater of the cities of Amman and Zerqa flows to the municipal wastewater treatment plant by gravity. Small turbines installed in the canals have been supplying energy ever since their construction. Such projects send a signal that resource recovery is possible and can make wastewater treatment more efficient and less costly.

The Frontis technology is based on microbial electrolysis, which combines many steps of a wastewater treatment plant in a single reactor, recovering nutrients as well as energy.

(Photo: Wikipedia)


China has improved inland surface water quality

During the last decades, China has achieved rapid technological and economic development, albeit at a huge environmental cost. The deterioration of inland surface water quality is considered one of the most serious environmental threats to ecosystems and ultimately to public health.

Since 2001, China has made major efforts to tighten the enforcement of environmental rules in order to stop water pollution from cities, farms, and industry. Under the government’s 10th National Five-Year Plan, large investments were made in pollution control and wastewater discharge regulation systems.

Small research studies showed that this campaign made China’s lakes and rivers cleaner. Water quality has since improved significantly, although other parts of the country still struggle with polluted water.

Now, a team of researchers at the Chinese Academy of Sciences in Beijing has published one of the most comprehensive national investigations of China’s surface water quality in the renowned journal Science. The researchers investigated all regions of the country to learn how surface water responds to multiple driving forces over time and space. Their report assesses water quality using three parameters: dissolved oxygen (DO), chemical oxygen demand (COD), and ammonium nitrogen (N). They performed monthly site-level measurements at major Chinese rivers and lakes across the country between 2003 and 2017.

Given the regional variation in China’s inland water quality and the dynamics of multiple anthropogenic pollution sources, such studies are crucial for identifying regulation measures and water quality improvement policies adapted to the ecosystems of the country’s diverse regions.

The results show that over the past 15 years, annual mean pollutant concentrations declined across the country at significant linear rates or were maintained at acceptable levels. Water quality improved annually by 1.77% for COD, 1.83% for N, and 1.45% for DO. Even where environmental water standards have not yet been fully implemented, the study shows that China’s water quality is improving nonetheless.

The best news is that notoriously high pollution levels have declined as cities and industry have worked to clean up and reduce their discharges. According to the authors, the most visible improvement occurred in northern China, while water quality in the western regions remained at low pollution levels throughout the observation period, likely because pollution is caused by human activity, of which there is less in those parts of the country.

Despite large efforts to reduce pollution discharges, urban areas are still the major pollution centers. They face additional pressure from constant migration and the rapid urbanization of rural regions. Especially in northern China, with its high density of human activity and exploding urbanization, achieving and maintaining a clean environment is a permanent struggle.

To further reduce pollution and improve water quality, the authors recommend that future activities focus on water management systems and water pollution control. For both, the central government has issued guidelines to control and improve water use and pollution discharge at regional and national levels for 2020 and 2030.

At Frontis Energy, we certainly support activities in China that help improve the country’s water quality and public health. The Frontis technology gives its users an incentive to clean wastewater before discharge by extracting its energy. Our patent-pending solutions are based on microbial electrolysis, which extracts energy from wastewater, and are particularly applicable to China.

Mima Varničić, 2020

(Photo: Gil Dekel / Pixabay)


Future challenges for wind energy

Because wind turbines have been working for decades, many people believe there is no need for further improvement. Yet wind energy has the potential to become one of the world’s cheapest energy sources. A recent article in Science addresses the major challenges that must be solved to drive innovation in wind energy. Essentially, three directions were identified:

  1. Better use of wind currents
  2. Structural and system dynamics of wind turbines
  3. Grid reliability of wind power

To make better use of wind currents, the dynamics of air masses and their interactions with land and turbines must be understood. Our knowledge of wind currents in complex terrain and under different atmospheric conditions is still very limited. These conditions must be modeled more precisely so that the operation of large wind turbines becomes more productive and cheaper.

To capture more energy, wind turbines have grown in size, and as they share ever larger areas with other turbines, the air flow between them changes increasingly.

As the height of wind turbines increases, we also need to understand wind dynamics at these heights. Simplified physical models have so far allowed wind turbines to be installed, and their performance predicted, across a variety of terrain types. The next challenge is to model the different conditions so that wind turbines can be optimized to be inexpensive, controllable, and installed in the right place.

The second essential direction is a better understanding of wind turbine structure and system dynamics. Today, wind turbines are the largest flexible, rotating machines in the world. Blade lengths routinely exceed 80 meters, and the towers rise well over 100 meters; to illustrate, three Airbus A380s could fit into the swept area of a single wind turbine. To withstand increasing structural loads, these systems are becoming bigger and heavier, which requires new materials and manufacturing processes, because scalability, transport, structural integrity, and recycling of the materials used are reaching their limits.

In addition, the interface between turbine and atmospheric dynamics raises several important research questions. Many simplified assumptions on which previous wind turbines were based no longer apply. The challenge is not only to understand the atmosphere, but also to find out which factors are decisive for the efficiency of power generation and for structural safety.

The third essential direction concerns the power grid, which is not designed for the operation of large additional wind resources and will therefore have to be fundamentally different from today’s. A large increase in variable wind and solar power is expected. To maintain a functional, efficient, and reliable network, these power generators must be predictable and controllable. Renewable electricity generators must also be able to provide not only electricity but also stabilizing grid services. The path to the future requires integrated systems research at the interfaces between atmospheric physics, wind turbine dynamics, plant control, and grid operation. This also includes new energy storage solutions such as power-to-gas.

Wind turbines and their electricity storage can provide important grid services such as frequency control, ramp control, and voltage regulation. Innovative control could exploit the properties of wind turbines to optimize the energy production of the system while providing these essential services. For example, modern data processing technologies can deliver large amounts of sensor data that can be applied to the entire system. This can improve energy capture, which in turn can significantly reduce operating costs. Realizing these goals requires extensive research at the interfaces of atmospheric flow modeling, individual turbine dynamics, and wind turbine control with the operation of larger electrical systems.

Advances in science are essential to drive innovation, cut costs, and achieve smooth integration into the power grid. In addition, environmental factors must be taken into account when expanding wind energy. To be successful, the expansion of wind energy must proceed responsibly, minimizing damage to the landscape. Investments in science and interdisciplinary research in these areas will certainly help to find solutions acceptable to everyone involved.

Such projects include studies that characterize and help understand the effects of wind turbines on wildlife. Research enabling innovations and inexpensive technologies to investigate these effects, both onshore and offshore, is currently being intensively pursued. It must be understood how wind farms can be sited so that local impacts are minimized while the affected communities benefit economically.

These major challenges in wind research complement each other. Characterizing the operating zone of wind turbines in the atmosphere will be crucial for developing the next generation of even larger, more economical turbines. Understanding both the dynamic control of the plants and the prediction of atmospheric inflow enables better control.

As an innovative company, Frontis Energy supports the transition to CO2-neutral energy generation.


Turbocharged lithium batteries at high temperatures

One of the biggest hurdles for the electrification of road traffic is the long charging time of the lithium batteries in electric vehicles. A recent research report has now shown that the charging time can be reduced to 10 minutes if the battery is heated while charging.

A lithium battery can power a 320-kilometer trip after only 10 minutes of charging − provided that it is held at about 60 °C while charging.

Lithium-ion batteries charge slowly at room temperature: a full charge takes more than three hours, compared with about three minutes to fill a gas tank.

A critical barrier to rapid charging is lithium plating, which normally occurs at high charging rates and drastically reduces battery life and safety. Researchers at Pennsylvania State University in University Park have introduced an asymmetric temperature modulation method that charges a lithium battery at an elevated temperature of 60 °C.

High-speed charging typically causes lithium to plate onto one of the battery electrodes (lithium plating). This blocks the flow of charge and eventually makes the battery unusable. To prevent lithium deposits on the anode, the researchers limited the exposure to 60 °C to only about 10 minutes per cycle.

Using industrially available materials, the researchers limited the capacity loss after 500 cycles to 20%. A battery fast-charged at room temperature, by contrast, endured only 60 cycles before its electrode was plated.

The asymmetrical temperature between charging and discharging opens up a new way to improve the ion transport during charging and at the same time achieve a long service life.

For many decades, it was generally believed that lithium batteries should not be operated at high temperatures because of accelerated material degradation. Contrary to this conventional wisdom, the researchers introduced a rapid charging process that charges the cell at 60 °C and discharges it at a cool temperature. In addition, charging at 60 °C reduces the battery’s cooling requirement more than twelvefold.
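
The protocol itself is simple to describe. Below is a minimal, self-contained sketch of how such an asymmetric temperature modulation cycle might be orchestrated; the toy cell model, the 6C rate, and the state-of-charge target are our illustrative assumptions, not the authors’ implementation.

```python
# Sketch of an asymmetric temperature modulation charge cycle
# (toy model; rates and thresholds are illustrative assumptions).
from dataclasses import dataclass

@dataclass
class Cell:
    soc: float = 0.1       # state of charge, 0..1
    temp_c: float = 25.0   # cell temperature (°C)

    def heat_to(self, t): self.temp_c = t
    def cool_to_ambient(self): self.temp_c = 25.0

CHARGE_TEMP_C = 60.0   # elevated charge temperature from the study
MAX_HOT_MIN = 10.0     # cap hot exposure at ~10 minutes per cycle
C_RATE = 6.0           # assumed 6C rate, roughly a full charge in 10 minutes

def fast_charge(cell: Cell) -> Cell:
    cell.heat_to(CHARGE_TEMP_C)          # preheat before charging
    minutes = 0.0
    while cell.soc < 0.8 and minutes < MAX_HOT_MIN:
        cell.soc = min(1.0, cell.soc + C_RATE / 60.0)  # one minute of charging
        minutes += 1.0
    cell.cool_to_ambient()               # discharge happens at cool temperature
    return cell

print(fast_charge(Cell()).soc)  # target reached well inside the 10-minute hot window
```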

In battery applications, the discharge profiles depend on the end user, while the charging protocol is determined by the manufacturer and can therefore be specially designed and controlled. The quick-charging process presented here opens up a new way of designing electrochemical energy systems that can achieve high performance and a long service life at the same time.

At Frontis Energy, we too think this simple new charging process is promising, and we look forward to the market launch of this rapid charging method.

(Photo: iStock)


Bioelectrically engineered fuel produced by yeasts

Yeasts such as Saccharomyces cerevisiae are, as the name suggests, used for the large-scale production of beer and other alcoholic beverages. Their high salt and ethanol tolerance makes them useful not only for producing beverages but also for producing combustion fuels at high alcohol concentrations. Besides ethanol, long-chain fusel alcohols are of high interest for biofuel production as well. Bioethanol is already blended with gasoline and thus improves the CO2 balance of internal combustion engines. This liquid biofuel is made from either starch or lignocellulose. The production and use of bioethanol supports local economies, reduces CO2 emissions, and promotes self-sufficiency. The latter is especially important for resource-depleted landlocked countries.

To efficiently produce ethanol and other alcohols from lignocellulose hydrolysates, yeasts must use both glucose and pentoses such as xylose and arabinose, since lignocellulosic biomass is rich in both glucose and xylose. Herein lies the main disadvantage of Saccharomyces cerevisiae: it does not ferment xylose. Identifying other yeast strains capable of fermenting both sugars could solve the problem. Highly efficient ethanol-producing yeasts can be grown in co-culture with yeasts capable of fermenting lignocellulose sugars. One such yeast is Wickerhamomyces anomalus.

To further improve ethanol production, traditional fermentation can be supported by bioelectric fermentation technology, in which microbial metabolism is controlled electrochemically. This technology has many benefits. Applying an electrochemical potential makes the fermentation process more selective, which in turn increases the efficiency of sugar utilization. In addition, the need for additives to control the redox balance and pH is minimized. Finally, cell growth can be stimulated, further increasing alcohol production.

Such bioelectric reactors are galvanic cells. The electrodes in a bioelectric reactor may act as electron acceptors (anodes) or electron sources (cathodes). These electrochemical changes affect metabolism and cell regulation as well as the interactions between the yeasts used. Now, a research group from Nepal (a resource-depleted landlocked country) has used new strains of Saccharomyces cerevisiae and Wickerhamomyces anomalus in a bioelectric fermenter to improve ethanol production from biomass. The results were published in the journal Frontiers in Energy Research.

For their study, the researchers chose Saccharomyces cerevisiae and Wickerhamomyces anomalus because both are good ethanol producers, and the latter is also able to convert xylose to ethanol. After the researchers applied a voltage to the bioelectrical system, ethanol production doubled. Both yeasts formed a biofilm on the electrodes, making the system ideal as a flow-through system because the microorganisms are not washed out.

Saccharomyces cerevisiae cells in a brightfield micrograph at 600-fold magnification (Photo: Amanda Luraschi)

The researchers speculated that the increased ethanol production was due to better conversion of pyruvate to ethanol − the yeast’s central metabolic pathway − which they attributed to accelerated redox reactions at the anode and cathode. The applied external voltage polarized the ions present in the cytosol, facilitating electron transfer from the cathode. Together with accelerated glucose oxidation, this probably led to the increased ethanol production.

Normally, pyruvate is converted into ethanol by fermenting yeast. An external voltage can control the kinetics of glucose metabolism in Saccharomyces cerevisiae under both aerobic and anaerobic conditions. Intracellular and trans-plasma-membrane electron transfer systems play an important role in electron transport across the cell membrane. The electron transfer system consists of cytochromes and various redox enzymes, which confer redox activity to the membrane at certain sites.

The authors also found that an increased salt concentration improved conductivity and therefore ethanol production. The increased ethanol production from lignocellulosic biomass may also have been due to the presence of various natural compounds that promoted yeast growth. When the cellulose acetate membrane was replaced by a Nafion™ membrane, ethanol production increased as well, perhaps due to improved transport of xylose through the Nafion™ membrane and the decrease in internal resistance. A further increase in ethanol production was observed when the bioelectrical reactor was operated with fine platinum particles coated on the platinum anode and neutral red deposited on the graphite cathode.

Several yeast cultures from left to right: Saccharomyces cerevisiae, Candida utilis, Aureobasidium pullulans, Trichosporon cutaneum, Saccharomycopsis capsularis, Saccharomycopsis lipolytica, Hanseniaspora guilliermondii, Hansenula capsulata, Saccharomyces carlsbergensis, Saccharomyces rouxii, Rhodotorula rubra, Phaffia rhodozyma, Cryptococcus laurentii, Metschnikowia pulcherrima, Rhodotorula pallida

At Frontis Energy, we think the present study is promising. However, long-chain fusel alcohols should be considered in the future, as they are less volatile and more compatible with current internal combustion engines. They can also be easily converted into the corresponding long-chain hydrocarbons.


Framework for a global carbon budget

Over the past decade, numerous studies have shown that global warming is roughly proportional to the cumulative amount of CO2 emitted into our atmosphere. In this way, one can estimate our remaining carbon budget: the total amount of man-made carbon dioxide that can still be released into the atmosphere before a set global temperature limit is reached. The nations of the world agreed on this limit in the 2015 Paris Agreement: warming should not exceed 1.5°C and in any case stay well below 2.0°C. However, diverging estimates have been made for the remaining carbon budget, which has a negative impact on policy-making. Now, an international research group of renowned climate experts has published a framework for calculating the global CO2 budget in Nature. The researchers suggest that applying this framework should help to reconcile differing estimates of the carbon budget, reducing uncertainties in research and policy.

Since the fifth report of the Intergovernmental Panel on Climate Change (IPCC), the concept of a carbon budget has become more important as an instrument for guiding climate policy. Over the past decade, a series of studies has clarified why the increase in the global average temperature is roughly proportional to the total amount of CO2 emissions caused by human activity since the Industrial Revolution. In the framework, the research group cites numerous published documents that provide evidence for the linearity of this correlation. This literature has allowed scientists to define the linear relationship between warming and CO2 emissions as the transient climate response to cumulative CO2 emissions (TCRE). The linearity is an appealing concept given the complexity of the Earth’s response to our CO2 emissions. Recent models have included additional processes that affect future warming, among them the thawing of the Arctic permafrost. These additional processes increase the uncertainty of current climate models. Moreover, global warming is not caused by CO2 emissions alone: other greenhouse gases, such as methane, fluorinated gases, and nitrous oxide, as well as aerosols and their precursors, affect global temperatures. This further complicates the relationship between future CO2 emissions and warming.

In the case of global warming caused by CO2, every tonne contributes, whether it is emitted in the future, now, or in the last century. This means that global CO2 emissions must be reduced to zero and then remain zero. It also means that the more we emit in the coming years, the faster we will have to reduce our emissions later. At zero emissions, warming would stabilize but not disappear; it may also reverse. An overdraft of the carbon budget would have to be compensated by removing CO2 later. One way of removing CO2 from the atmosphere is a technology called direct air capture, which we have reported on earlier. Ultimately, this will probably be the only option left, as carbon-neutral renewable energy sources make up only 5% of our energy mix. Establishing a global carbon budget further highlights the urgency of the clean energy transition. Unfortunately, estimates of the CO2 remaining in our carbon budget diverge widely. In their framework, the researchers cite numerous studies on carbon budgets for the 1.5°C target. Starting in 2018, these range from zero to 1,000 gigatons of CO2. For the 2.0°C target, the estimates range from around 700 to nearly 2,000 gigatons of remaining CO2 emissions. The aim of the researchers is to limit this uncertainty by establishing a budget framework. Its central element is the equation for calculating the remaining carbon budget:

Blim = (Tlim − Thist − TnonCO2 − TZEC) / TCRE − EEsfb

The budget of the remaining CO2 emissions (Blim) for a specific temperature limit (Tlim) is a function of five terms that represent aspects of the geophysical and human-environment systems: the historical man-made warming (Thist), the non-CO2 contribution to the future temperature increase (TnonCO2), the zero-emissions commitment (TZEC), the TCRE, and an adjustment for possible unrepresented Earth system feedbacks (EEsfb).
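
For illustration, here is a minimal calculation using this equation. All input values are placeholder assumptions chosen for demonstration, not the estimates from the framework paper.

```python
# Remaining carbon budget B_lim from the framework equation above.
# All input values are illustrative placeholders, not the paper's estimates.
T_lim    = 1.5    # temperature limit (°C)
T_hist   = 1.0    # historical human-induced warming (°C)
T_nonCO2 = 0.1    # future non-CO2 warming contribution (°C)
T_ZEC    = 0.0    # zero-emissions commitment (°C)
TCRE     = 0.45   # transient climate response (°C per 1,000 Gt CO2)
E_Esfb   = 100.0  # unrepresented Earth-system feedbacks (Gt CO2)

B_lim = (T_lim - T_hist - T_nonCO2 - T_ZEC) / TCRE * 1000 - E_Esfb
print(f"Remaining carbon budget: {B_lim:.0f} Gt CO2")  # ≈ 789 Gt CO2
```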

 

| Term | Key choices or uncertainties | Type | Level of understanding |
| --- | --- | --- | --- |
| Temperature limit (Tlim) | Choice of the temperature metric used to express global warming, of the pre-industrial reference, and consistency with global climate targets | Choice | Medium to high |
| Historical man-made warming (Thist) | Incomplete data and methods for estimating the man-made component; see also Tlim | Choice and uncertainty | Medium to high |
| Non-CO2 contribution to future global warming (TnonCO2) | Level of non-CO2 contributions coinciding with global net-zero CO2 emissions; depends on policy choices, but also on the uncertainty of their implementation | Choice and uncertainty | Medium |
| Non-CO2 contribution to future global warming (TnonCO2) | Climate response to non-CO2 forcers, such as aerosols and methane | Uncertainty | Low to medium |
| Zero-emissions commitment (TZEC) | Extent of the decadal zero-emissions commitment and near-zero annual carbon emissions | Uncertainty | Low |
| Transient climate response to cumulative CO2 emissions (TCRE) | TCRE uncertainty, linearity, and cumulative CO2 emissions affecting the temperature metrics of the TCRE estimate | Uncertainty | Low to medium |
| Transient climate response to cumulative CO2 emissions (TCRE) | Uncertainty of the TCRE linearity, value, and distribution beyond peak warming, which is affected by the reduction of cumulative CO2 emissions | Uncertainty | Low |
| Unrepresented Earth system feedbacks (EEsfb) | Impact and duration of permafrost thawing, as well as methane release from wetlands, on models and feedbacks | Uncertainty | Very low |

In the CO2 budget, the unrepresented Earth system feedbacks (EEsfb) are arguably the greatest uncertainty. These feedback processes are typically associated with the thawing of permafrost and the resulting long-term release of CO2 and CH4. However, other sources of feedback have been identified as well, including, for example, variations in CO2 uptake by vegetation and the associated nitrogen availability. Further feedback processes involve changes in surface albedo, cloud cover, or fire conditions.

It remains a challenge to adequately characterize the uncertainties surrounding the estimates of our carbon budget. In some cases, the reason for these uncertainties is inaccurate knowledge of the underlying processes or inaccurate measurements; in others, the terminology is used inconsistently. For better comparability and flexibility, the researchers propose to routinely report global surface air temperature values, which provide robust data for models and model runs over selected time periods. More detailed comparisons between published carbon budget estimates are currently difficult because the original data behind the publications are often missing. The researchers therefore propose to provide these data along with future publications.

Breaking down the carbon budget into its individual factors makes it possible to identify a number of promising pathways for future research. One area that might advance the field is to look more closely at the TCRE; future research is expected to narrow the range of TCRE uncertainties. Another promising area is the study of correlations between individual factors and their associated uncertainties, for example between the uncertainties in Thist and TnonCO2. This could be achieved by developing methods that allow a more reliable estimate of historical human-induced warming. It is also clear that less complex climate models are useful for further reducing the uncertainties of climate models and hence of the carbon budget. Currently, each factor of the framework presented by the researchers has its own uncertainties, and there is no method to formally combine them.

At Frontis Energy, too, we think that progress in these areas would improve our understanding of the estimates of our carbon budget. A systematic understanding of the carbon budget is crucial for effectively addressing the challenges of global warming.


Cheap, high-octane biofuel discovered

Researchers from the National Renewable Energy Laboratory (NREL) have developed a cheap method for producing high-octane gasoline from methanol. They recently published their method in the journal Nature Catalysis. Methanol can be synthesized from CO2 via various routes, as we reported last year. Biomass, such as wood, is one possibility.

The production of biofuels from wood, however, is too expensive to compete with fossil fuels. To solve this problem, the researchers combined basic research with an economic analysis: they first identified the most expensive part of the process and then found methods to reduce those costs, using methanol as an intermediate.

So far, the cost of converting methanol to gasoline or diesel was about $1 per gallon. The researchers have now reached a price of about $0.70 per gallon.

The catalytic conversion of methanol into gasoline is an important research area in the field of CO2 recovery. The traditional method relies on multi-stage processes and high temperatures; it is expensive and produces low-quality fuel in small quantities, so it cannot compete with petroleum-based fuels.

Hydrogen deficiency was the first problem the researchers had to overcome, since hydrogen is the key energy-carrying element in hydrocarbons. The researchers hypothesized that the transition metal copper would solve this problem, and it did. They estimated that the copper-infused catalyst yielded 38% more product at lower cost.

By facilitating the reincorporation of C4 byproducts during the homologation of dimethyl ether, the copper zeolite catalyst enabled this 38% increase in product yield and a 35% reduction in conversion cost compared with conventional zeolite catalysts. Alternatively, the C4 byproducts can be converted into a synthetic kerosene meeting five specifications of a typical jet fuel. In that case the fuel synthesis costs rise slightly, but the resulting product has a higher value.

Apart from the costs, the new process offers users further competitive advantages. For example, companies can compete with ethanol producers for credits for renewable fuels (if the carbon used comes from biogas or household waste). The process is also compatible with existing methanol plants that use natural gas or solid waste to produce syngas.


Semiconductive nanotubes with photovoltaic effect

Cost-effective and efficient methods for converting sunlight into electricity are the focus of green energy research. The solar cells developed for this purpose are currently made of semiconductors such as silicon, in which electrical energy is generated at the junction between two different semiconductors. However, the efficiency of these solar cells has almost reached its theoretical limit. New methods of converting sunlight into electricity must be found if solar energy is to be used as a major source of electricity. An international research team from Germany, Japan, and Israel has now made important progress in this direction. Zhang and colleagues recently published their findings in the prestigious journal Nature: they demonstrate a junction-free solar cell made by rolling an atomically thin semiconductor layer into a nanotube.

In a conventional solar cell, two regions of a semiconductor are doped with different chemical elements. The electrical current is carried by the negatively charged electrons of one region and by the positively charged electron holes of the other. At the junction between these two regions, an electric field forms. When sunlight is absorbed at this junction, electron-hole pairs are created. The electrons and holes are then separated by the electric field, generating an electric current. This conversion of solar energy into electricity is called the photovoltaic effect, and it is particularly important for green energy production. As mentioned above, its efficiency has almost reached the theoretical limit.

In technical terms, the photovoltaic effect occurs at traditional p-n junctions, where a p-type material (with an excess of holes) adjoins an n-type material (with an excess of electrons). Current arises from the photo-induced generation of electron-hole pairs and their subsequent separation. Further advances are expected from other photovoltaic effects that do not require a junction and occur only in crystals with broken inversion symmetry. So far, the practical implementation of these effects has been impeded by the low efficiency of the materials. Semiconductors with reduced dimensionality or a smaller band gap have proven to be more efficient. Transition metal dichalcogenides (TMDs), for example, are two-dimensional small-bandgap semiconductors in which various such effects have been observed upon breaking the inversion symmetry of their bulk crystals.

The reported bulk photovoltaic effect (BPVE) is based on tungsten disulfide, a member of the TMD family. Crystals of this material have a layered structure and can be exfoliated into sheets similar to graphite. The resulting atomic sheets can then be rolled by chemical methods into tubes about 100 nanometers in diameter. The authors produced photovoltaic devices from three forms of tungsten disulfide: a monolayer, a bilayer, and a nanotube.

By moving from a two-dimensional monolayer to a nanotube with polar properties, the authors achieved a systematic reduction in crystal symmetry that goes beyond merely broken inversion symmetry. The photovoltaic current density produced is orders of magnitude greater than that of other comparable materials. The results confirm not only the potential of TMD-based nanomaterials, but also the importance of reducing crystal symmetry for improving the BPVE.

While the nanotube devices had a large BPVE, the monolayer and bilayer devices produced only a negligible electric current under illumination. The researchers attribute the different performance of the solar cells to differences in their crystal symmetry. In this way, a current can be generated spontaneously in uniform semiconductors, without any junction.

The BPVE was first observed in 1956 at Bell Labs in New Jersey, just two years after the invention of the modern silicon solar cell. The effect is limited to non-centrosymmetric materials, which lack symmetry under spatial inversion, that is, under the combination of a 180° rotation and a reflection. The effect has two attractive properties: the current generated depends on the polarization of the incident light, and the associated voltage can exceed the band gap of the material, the energy required to excite free conduction electrons. However, the effect typically has a low conversion efficiency and was therefore of academic rather than industrial interest.

To achieve high efficiency, a photovoltaic material must combine strong light absorption with low internal symmetry. However, these two properties rarely coexist in a given material. Semiconductors that absorb most of the incident sunlight generally have high symmetry, which reduces or even prevents the effect. Low-symmetry materials, such as perovskite oxides, absorb little sunlight due to their large band gap. To circumvent this problem, efforts have been made to improve light absorption in low-symmetry materials, for example by the doping mentioned above. It has meanwhile been shown that the effect can also occur in semiconductors when mechanical fields are used to adjust the crystal symmetry of the material.

The newly discovered solution is encouraging with regard to the production of high-absorption semiconducting nanotubes. In the case of tungsten disulfide, the crystal symmetry of the nanotubes is reduced compared with the mono- and bilayers due to the curved walls of the tube. The combination of excellent light absorption and low crystal symmetry means that the nanotubes exhibit a significant photovoltaic effect, with a current density exceeding that of materials that are inherently low in symmetry. Nevertheless, the conversion efficiency achieved is still much lower than that of the conventional junction-based photovoltaic effect.

The authors’ findings demonstrate the great potential of nanotubes in solar energy production and raise various technological and scientific challenges. From an application perspective, it would be useful to produce a solar cell consisting of a large array of semiconductor nanotubes to check whether the approach is scalable. The direction of the generated current is largely determined by the internal symmetry of the material; uniform symmetry across the nanotube array would therefore be required to create a collective current, as the individual currents could otherwise cancel each other out.

At Frontis Energy, we wonder whether the described method could be combined with the classic photovoltaic effect in the same solar cell, with the two effects using the solar energy consecutively. That could increase overall efficiency. Despite the remaining challenges, the presented work offers a path toward the development of highly efficient solar cells.

(Photo: Wikipedia)


Machine learning makes smarter batteries

Renewable energies such as wind and solar are naturally intermittent. To balance their demand and supply, the batteries of electric vehicles, for example, can be charged and act as an energy buffer for the power grid. Cars spend most of their time idle and could feed their electricity back into the grid at the same time. While this is still a dream of the future, the commercialization of electric and hybrid vehicles is already creating a growing demand for long-lasting batteries, both for driving and for grid buffering. Consequently, methods for evaluating battery health will become increasingly important.

The long duration of battery health tests is a problem that hinders the rapid development of new batteries. Better battery life forecasting methods are therefore urgently needed but extremely difficult to develop. Now, Severson and her colleagues report in the journal Nature Energy that machine learning can help to predict battery life with computer models. The published algorithms use data from early charge and discharge cycles.

Normally, a figure of merit describes the health of a battery. It quantifies the battery’s ability to store energy relative to its original state. The state of health is 100% when the battery is new and decreases with time, analogous to the state of charge of a battery. Estimating the state of charge is, in turn, important to ensure safe and correct use. However, there is no consensus in industry and science as to what exactly a battery’s state of health is or how it should be determined.

The state of health of a battery reflects two signs of aging: progressive capacity decline and impedance increase (a measure of electrical resistance). Estimates of a battery’s state of health must therefore take into account both the drop in capacity and the rise in impedance.

Lithium-ion batteries, however, are complex systems in which both capacity fade and impedance increase are caused by multiple interacting processes. Most of these processes cannot be studied independently, since they often occur simultaneously. The state of health therefore cannot be determined from a single direct measurement. Conventional health assessment methods include examining the interactions between the electrodes of a battery. Since such methods often intervene directly in the battery system, they render the battery useless, which is hardly desirable.

A battery’s state of health can also be determined in less invasive ways, for example with adaptive models and experimental techniques. Adaptive models learn from recorded battery performance data and adjust themselves accordingly. They are useful when no system-specific battery information is available and are suitable for diagnosing aging processes. The main problem, however, is that they must be trained with experimental data before they can be used to determine the current capacity of a battery.

Experimental techniques are used to evaluate specific physical processes and failure mechanisms, which allows the rate of future capacity loss to be estimated. Unfortunately, these methods cannot detect intermittent faults. Alternative techniques use the rate of change of voltage or capacity rather than raw voltage and current data. To accelerate the development of battery technology, further methods are needed that can accurately predict battery life.

Severson and her colleagues have compiled a comprehensive data set of the performance of 124 commercial lithium-ion batteries during their charge and discharge cycles. The authors used a variety of rapid charging conditions with identical discharge conditions, which produced widely differing battery lifetimes, ranging from 150 to 2,300 cycles.

The researchers then used machine learning algorithms to analyze the data, creating models that reliably predict battery life. Their models detected signatures of capacity fade within the first 100 cycles of each experimentally characterized battery. The best model predicted the lifetime of about 91% of the data sets in the study. Using only the first five cycles, the batteries could even be classified into categories with short (<550 cycles) or long lifetimes.
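
To make the idea concrete, here is a minimal sketch of this kind of early-cycle classification using scikit-learn. The data are synthetic stand-ins, and the single feature − the variance of the discharge-capacity difference between two early cycles − is a simplified nod to the features used in the study; the coefficients and the 550-cycle threshold are our assumptions.

```python
# Sketch: classify batteries into short/long life from an early-cycle feature.
# Synthetic stand-in data; real work would use measured charge/discharge curves.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 124  # number of cells, as in the study

# Hypothetical feature: log variance of the discharge-capacity difference
# between cycle 100 and cycle 10 for each cell.
log_var_dq = rng.normal(-4.0, 1.0, size=n)

# Synthetic ground truth: higher variance <-> shorter life (the study's key trend).
cycle_life = 550 - 200 * (log_var_dq + 4.0) + rng.normal(0, 100, size=n)
y = (cycle_life < 550).astype(int)  # 1 = short-lived (<550 cycles)

X = log_var_dq.reshape(-1, 1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```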

The researchers’ work shows that data-driven modeling with machine learning allows forecasting the state of health of lithium-ion batteries. The models can identify aging processes that are not otherwise apparent in the capacity data of early cycles. Accordingly, the new approach complements previous predictive models. At Frontis Energy, we also see the potential to combine the generated data with models that predict the behavior of other complex dynamic systems.

(Photo: Wikipedia)