This article outlines the importance of solar cell technology in the context of world’s increasing energy needs, as well as the need to move away from fossil fuels in order to combat climate change. These factors drive the need for better solar cell technology, in terms of efficiency as well as carbon footprint during manufacturing. In order to understand all the challenges facing solar cell development, an understanding of the fundamental design, physics and materials making up solar cells is needed. This article moves from the very basics of semiconductors all the way to multijunction solar cells, making an effort to add historical and industrial context along the way. Ultimately, with the steadily decreasing cost of silicon-based solar cells, researchers need accurate data about their new devices as quickly as possible in order to maximize their opportunities in the market. The right tests, models and tools will allow them to do that.
We all use energy in our lives, and not just for sustaining our human biology in waking, walking and wondering. To look at this website requires electrical energy. The energy to make the device you’re reading on had to come from somewhere. If you own a vehicle, you probably have to charge it or fill it with gas to get around.
There are many kinds of energy, and humanity has become pretty good at getting and converting energy from a lot of different sources. We’ve also started using more of it over time.
Data Source: https://www.iea.org/geco/data/
One of the types of energy everyone is most familiar with, electrical energy (or electricity), is actually a secondary energy source. The reason is that we have to produce it through some other means, and some of those means (primary energy sources) are in the pie chart above.
For example, we can turn turbines with flowing water, and the spinning turbine can then generate electricity. This is known as hydroelectric energy.
Nuclear energy takes advantage of Einstein’s famous equation, E=mc2, where mass (m) is converted to energy (E) during the splitting of an atom (nuclear fission). This releases huge amounts of heat that can be used to boil water into steam, which can then turn a turbine to generate electricity.
A much cleaner form of nuclear energy involves putting atoms together (nuclear fusion). Fusion is the reaction that powers our sun, and needs intense heat and pressure to happen, so it’s still under development. However, it’s seen great progress in recent years.
Coal, oil and gas are called fossil fuels, because they are produced from plants and animals that have decayed over hundreds of millions of years. As you can see from the above pie chart, we get a huge portion of our energy from burning fossil fuels. Burning fossil fuels releases energy that can produce steam, turn a turbine, and again give us electricity. It can also just produce motion, as in internal combustion engines.
Wind energy takes advantage of the Earth’s breeze to spin turbines and—you guessed it—generate electricity.
Solar energy is named after the Latin word solaris, meaning “of the sun”. There are many ways of using the sun’s energy, from heating homes and industrial processes directly, to heating molten salts and water to generate electricity. There are also ways to convert solar energy directly into electricity.
Solar photovoltaic energy takes advantage of the photovoltaic effect, a close cousin of the photoelectric effect, where an electrical current flows as a result of the sun shining on specific kinds of materials.
Think photo=light and volt=electricity. If you’ve ever seen a solar panel, that’s an example of solar photovoltaic energy, and we’ll see that it plays an important role in meeting the world’s anticipated energy needs.
There are many other ways to generate energy; this is just a sample of the main ways we get the energy for all the things we do around the world.
Humanity’s need to use energy is not going away. In fact, our energy usage keeps growing. According to the International Energy Agency, the world’s energy consumption in 2018 grew at nearly twice the average rate seen since 2010, requiring an additional 1000 TWh of generation (a trillion kilowatt-hours, or roughly the energy it would take for everyone on Earth to run a hairdryer for four days straight).
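As a back-of-the-envelope check of that hairdryer comparison (the 1.5 kW hairdryer and 7.7 billion people are assumptions for illustration, not figures from the IEA source):

```python
# Back-of-the-envelope check of the hairdryer comparison.  The 1.5 kW
# hairdryer and 7.7 billion people are assumptions, not IEA figures.
extra_energy_kwh = 1000e9            # 1000 TWh expressed in kWh
hairdryer_kw = 1.5                   # typical hairdryer power draw
population = 7.7e9

hours = extra_energy_kwh / (population * hairdryer_kw)
days = hours / 24
print(f"about {days:.1f} days of everyone running a hairdryer")
```

The result comes out near three and a half days, which rounds to the “four days straight” quoted above.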
The 2018 World Energy Outlook reports that global energy demand is expected to increase by more than 25% by 2040.
These projections are conservative in the sense that they assume humanity doesn’t drastically increase its energy usage, particularly in developing countries. However, we can and should do better than that, because we need to strive to improve the living conditions of everyone on Earth. An investigation of the data reveals that GDP and life expectancy rise, and infant mortality falls, as energy use per person increases. Bottom line: quality of life improves with access to more energy.
This would require a heroic effort, amounting to bringing a new 1 GW power plant online every single day.
This energy has to come from the different sources mentioned above. Some of them are non-renewable, meaning they don’t regenerate faster than we use them. The graph below shows how the generation mix of the electricity we used changed in 2018.
A many-pronged approach is needed to meet the energy demands of tomorrow. As we start to deplete our fossil fuel reserves, we can see that we’ll need much more energy from renewable sources such as hydro, wind and solar photovoltaics.
There’s another big reason we want to move away from our dependence on fossil fuels, though.
“… atmospheric CO2 has increased since the Industrial Revolution” Image source: NASA: https://climate.nasa.gov/vital-signs/carbon-dioxide/
IPCC 2018 report on p.13, showing the impacts and risks of global warming for different systems. Image Source:https://report.ipcc.ch/sr15/pdf/sr15_spm_final.pdf
The photovoltaic effect is closely related to the photoelectric effect, with a key difference. In the photoelectric effect, electrons are emitted into space. But, in the photovoltaic effect, electrons enter what we call the conduction band of the material. Since the photovoltaic effect doesn’t require breaking an electron completely free of a material, it requires less energy, and thus can occur more often.
What do we mean by the “conduction band”? Well, there are two main bands of energies that electrons can have. One range of energies, where an electron is bound around a single atom or molecule, is called the valence band. The other, where the electron is free to move from one atom or molecule to another, is called the conduction band. We’ll discuss this a bit more later, but the general idea is that when an electron is in the conduction band, we can conduct electricity, or have electrons move about more freely.
There’s a second requirement for the photovoltaic effect, though. Once an electron is in the conduction band, for the photovoltaic effect to occur, the electron has to move under the influence of an electric potential (or voltage). We’ll see later that this potential can be produced in semiconductors by putting two different, specific materials in close contact.
Electrons that move away from their parent material create a charge difference between where they are, and where they were. This charge difference generates a voltage, much like that across the terminals of a battery, and the moving electrons make up an electrical current. Current and voltage give us electrical power, or energy over time. The photovoltaic effect is the more practical way we convert solar energy into electrical energy. It’s what solar cells rely on.
The first photovoltaic cell was made at Bell Labs in 1954, and it could only convert about 4% of sunlight into electricity. Today, we’re doing much better, with commercial conversion efficiencies closer to about 20%, and research efficiencies pushing much higher, as shown in the NREL graph below. Solar cells have also gotten a lot cheaper, with cost decreasing by about 10% every year. There are many growing advantages to using solar cells.
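Taking that rough 10%-per-year cost decline at face value, a couple of lines give the implied halving time:

```python
import math

# If module cost falls about 10% per year, the cost after n years is
# cost0 * 0.9**n; the implied halving time follows directly.
annual_factor = 0.9
halving_years = math.log(0.5) / math.log(annual_factor)
print(f"cost halves roughly every {halving_years:.1f} years")   # ~6.6 years
```

In other words, at that pace the price of a solar cell halves roughly every six to seven years.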
For more information on their screening procedures for researchers who make it onto the graph, visit their website: https://www.nrel.gov/pv/cell-efficiency.html
Image Source: https://www.researchgate.net/publication/311255868_’Swanson’s_Law’_plan_to_mitigate_global_climate_change
Photovoltaics are found in systems as small as cell phone battery packs, or as large as fields. They all work on the same basic principles, though.
As we mentioned, a photovoltaic cell is a semiconductor diode. That might not be a very helpful explanation if you don’t know what a semiconductor is, or what a diode is, so we’ll give you a brief overview here. If you already know, you can feel free to skip ahead to Photovoltaic cell basics.
Semiconductors make up microchips, and pretty much anything digital or computerized. To understand what they do differently from other substances, we have to look at the periodic table of elements.
Once these elements have their four bonds, the electrons are shared equally, and each atom is content to stay as it is. There are no extra electrons free to move around, so there’s essentially no electrical conductivity. For this reason, these elements (carbon, silicon and germanium) are very poor conductors when they’re pure. Because silicon is the most common element used within solar cells, we’ll use silicon as an example for the rest of this section.
P-type doping (which stands for positive-type doping) is produced when you mix in a small amount of boron or gallium. These elements are to the left of silicon, so they only have three electrons in their outer orbital. When they join into a lattice, there’s one direction that will not have a bond, and will form a “hole” where an electron is missing. This is a place where an electron can jump to, leaving a hole somewhere else. In this way, the hole (or a missing electron) can be thought of as a positive charge that can move around the lattice, which is why this doping is called p-type.
Both the n-type and p-type doping produce materials that can conduct a small amount of electricity. They’re not as good as metal conductors, however, which is why they were given the name semiconductor, meaning partially conductive.
Now that we’ve gained a basic understanding of semiconductors, it’s time to apply this understanding to the most basic semiconductor device: the diode.
You can now understand why one of the main uses of diodes is as a one-way gate: they’ll only conduct current in one direction, and only once the junction potential is overcome.
Now that you understand what a semiconductor diode is, we can go back to our main focus, which is the photovoltaic cell, a specific kind of semiconductor diode.
There is a minimum energy that an electron has to obtain before it breaks free of the lattice to move around and conduct electricity. This is the difference between the energy of a bound electron and a free electron.
There’s no one single value, though; there’s actually a range of energies that electrons can have when they’re bound, and this range and state is referred to as the valence band of electrons.
The range of energies and state of electrons that are free to conduct is called the conduction band. The extra energy that electrons have to gain to move from bondage to freedom is the minimum difference between the valence band and the conduction band. We call this difference the band gap.
In silicon, the band gap energy is about 1.11 electron-volts (eV); an electron-volt is the energy one electron gains when it moves through an electric potential of one volt.
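As a quick sketch of what this number means, we can convert the band gap to joules and to the longest photon wavelength that still carries enough energy to free an electron in silicon, via E = h×c/λ:

```python
# Convert silicon's band gap to joules, and to the longest photon
# wavelength that still carries enough energy, via E = h*c/lambda.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
q = 1.602e-19        # joules per electron-volt

band_gap_ev = 1.11
band_gap_j = band_gap_ev * q                 # ~1.78e-19 J
cutoff_nm = h * c / band_gap_j * 1e9         # ~1117 nm, in the near-infrared
print(f"{band_gap_j:.2e} J, cutoff {cutoff_nm:.0f} nm")
```

So any photon with a wavelength shorter than about 1117 nm (all of visible light, and some near-infrared) can promote an electron across silicon’s band gap.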
In a semiconductor, the band gap energy is small enough that we can move electrons between being bound and being free, simply by shining light on the material. For comparison, insulators have a band gap energy that is too high (usually greater than 3 eV), meaning that it takes too much energy to free electrons, so under normal conditions, they aren’t conductive. In metals, the valence and conduction bands overlap, so electrons can easily change states between valence and conduction bands, providing a surplus of electrons that can conduct current.
All right, now let’s say we have light shining on our semiconductor diode of the right energy—larger than the band gap. It gets absorbed by an electron. This electron moves from the valence band into the conduction band, going from bondage to freedom. It can move around now.
The gap that it left behind is a hole. By shining light on our semiconductor, we’ve created both a conductive electron and a hole. For this reason, the process is often called electron-hole pair generation.
If we connect a wire between the top and bottom of our photovoltaic cell, this electron can now move all the way around through the wire, and reach the hole on the other side of the diode. We’ve just generated a current.
We have our photovoltaic cell: a semiconductor diode that conducts electricity when we shine light on it. We’ve converted energy from the sun into electricity.
There are several important details to understand and emphasize here. The first thing is the direction of photocurrent flow. The electric current that flows as a result of light is actually in the opposite direction of the normal diode current. Normally current (defined as the movement of positive charge) moves from the anode to the cathode in a diode. In a photovoltaic cell, however, we see that it’s moving in the opposite direction the long way around: from the cathode to the anode.
Remembering that a photovoltaic cell is just a special kind of semiconductor diode, if we want to figure out the total current flowing, we can just add (or in this case, subtract) the diode current and the photocurrent. If we assume the light shining on a photovoltaic cell stays about the same, then the photocurrent is just a constant, IL, while the diode current is given by Shockley’s equation. Since the current we’re most interested in for a photovoltaic cell is the photocurrent, we choose that direction to be positive, resulting in the following equation for current: I = IL − I0(e^(qV/kT) − 1), where I0 is the diode’s dark saturation current, V is the voltage across the cell, q is the elementary charge, k is Boltzmann’s constant, and T is the temperature.
Note that when we don’t have any light shining on the solar cell (IL = 0), the equation reduces to the Shockley equation. Because this is how the solar cell behaves under dark conditions, the second term in the equation is often called the dark current.
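The ideal illuminated-diode relation, I = IL − I0(e^(qV/kT) − 1), is easy to play with in a few lines of Python; the IL and I0 values below are illustrative, not measurements of a real cell:

```python
import math

# Ideal illuminated-diode model of a solar cell:
#     I = IL - I0*(exp(qV/kT) - 1)
# IL (light-generated current) and I0 (dark saturation current) are
# illustrative values, not measurements.
q = 1.602e-19        # elementary charge, C
k = 1.381e-23        # Boltzmann constant, J/K
T = 300.0            # cell temperature, K
IL = 0.035           # light-generated current, A (assumed)
I0 = 1e-10           # dark saturation current, A (assumed)

def cell_current(v):
    """Net current at terminal voltage v, positive in the photocurrent direction."""
    return IL - I0 * math.expm1(q * v / (k * T))

# At short circuit (v = 0) the dark-current term vanishes and I = IL;
# in the dark (IL = 0) only the Shockley diode term remains.
print(cell_current(0.0))
```

Sweeping v from zero upward traces out exactly the inverted, shifted diode curve described next.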
The resulting curve is an inverted and shifted Shockley diode curve that is famous in photovoltaics, called the solar cell IV characteristic curve:
Another quick note: the way this curve is depicted depends on which particle is defined as the current carrier. Whether you treat it as the negative electron or the positive hole changes the sign, and therefore whether the graph is flipped upside down or not. Both conventions are correct ways of representing what’s occurring in a solar cell.
Another equivalent way to think about the current flow in a photovoltaic cell is that the diode’s natural current flow siphons off some of the current that would normally go to the load.
To better understand the behaviour of the photovoltaic cell, it’s common to use an electric circuit analogy for what’s physically going on. We call this a circuit model, and we can actually draw the electric circuit associated with the model described above: the photocurrent is represented by a constant current source connected in parallel with a regular diode, feeding our intended load.
While the model we’ve described doesn’t quite take into account everything that’s happening (as we’ll see later), this is a really good starting point since it benefits from being pretty straightforward to talk about and put measures around. For these reasons, a lot of standards and values have been assigned to describe this IV curve graph and allow solar cell devices to be compared.
The maximum current in the device flows when there’s no resistance; that is, when we hook a simple wire between the terminals. Since this is what happens in a short circuit, the maximum current is called the short-circuit current, or Isc. This current isn’t doing anything useful, though, because at short circuit there is no voltage across the photovoltaic cell.
To make the current useful, we have to make it do work, transferring some of its energy into a load. The larger the load resistance, the higher the voltage across the cell. The largest load we can possibly have is an infinite resistance, when the wires aren’t connected at all and electrons would have to cross an air gap to reach the other terminal. Because this maximum voltage happens when the circuit is open (i.e. disconnected), we call it the open-circuit voltage, or Voc. In this case no current flows, so again we aren’t getting much use out of the solar cell. We have to operate the solar cell, then, somewhere between the two extremes of Isc and Voc.
Power in an electric circuit is calculated by multiplying the current by the voltage; ideally we would get Isc multiplied by Voc. Power from the photovoltaic cell does rise steadily as we increase the voltage, but as the voltage increases, the number of electrons actually escaping the depletion region starts to drop, so the curve falls off quickly beyond a certain point. Somewhere before that “cliff” there is a point where we get the maximum power, and this operating point is called the maximum power point, or MPP.
We can see that multiplying current by voltage at the maximum power point (or anywhere else, for that matter) is the same as calculating the area of a rectangle under the curve. This area will always be less than the ideal Isc x Voc we’d love to achieve. Because these calculations can be thought of as area calculations, scientists define a term called the Fill Factor (written as FF) that describes how much power we are getting out of the cell compared to the dream of Isc x Voc.
In other words, the Fill Factor represents how close we are to “filling” the rectangle of Isc x Voc.
A quick note worth mentioning is that the current often depends on the area of the solar cell in question. Because researchers build different sizes of solar cells at different stages of development, the current density is usually used to compare currents more fairly. This is the current divided by the area (usually in units of mA per cm2), symbolized by the letter J. So you’ll often see this symbol in place of the current I that we’ve been talking about so far, as we see in the typical equation for Fill Factor below.
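Putting the last few sections together, here’s a short sketch that sweeps the same ideal illuminated-diode model and extracts Isc, Voc, the maximum power point and the Fill Factor FF = Pmp / (Isc × Voc); the IL and I0 values are illustrative only:

```python
import math

# Sweep the ideal illuminated-diode curve, I = IL - I0*(exp(qV/kT) - 1),
# and extract Isc, Voc, the maximum power point and the Fill Factor.
# IL and I0 are illustrative values, not measurements of a real cell.
q, k, T = 1.602e-19, 1.381e-23, 300.0
IL, I0 = 0.035, 1e-10
vt = k * T / q                                # thermal voltage, ~25.9 mV

def current(v):
    return IL - I0 * math.expm1(v / vt)

isc = current(0.0)                            # short-circuit current
voc = vt * math.log(IL / I0 + 1.0)            # open-circuit voltage

# Find the maximum power point on a fine voltage grid.
vmp, pmax = 0.0, 0.0
for i in range(10001):
    v = i * voc / 10000
    p = v * current(v)
    if p > pmax:
        vmp, pmax = v, p

ff = pmax / (isc * voc)                       # Fill Factor
print(f"Isc={isc*1e3:.1f} mA  Voc={voc:.3f} V  FF={ff:.2f}")
```

For this ideal (loss-free) model the Fill Factor comes out around 0.8; real cells with series and shunt resistance fall below that.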
All of these metrics are shown below.
Image modified from: http://www.alternative-energy-tutorials.com/energy-articles/solar-cell-i-v-characteristic.html
This efficiency (eta above) is one of the most important measurements at the end of the day, because it determines how much electricity humanity can get from solar cells. If we can make photovoltaic cells that convert a greater portion of the sun’s energy, that means we’re that much closer to meeting the energy requirements for a bright future.
As you might have already figured out, there’s a balance to be found and some limits to what we can do, which is what scientists and engineers are trying to overcome when they design different types of solar cells.
Here are some more in-depth details about Shockley and Queisser’s calculations.
They started by calculating the number of photons with energy greater than the band gap that are shining on a given area each second. Because each photon has an energy E = hν, we divide that out of Planck’s equation, then add up all the contributions from energies higher than the band gap:
Shockley paper: http://metronu.ulb.ac.be/npauly/art_2014_2015/shockley_1961.pdf
The total power we can get out, assuming all of this absorbed energy is converted into electricity, is this number of photons multiplied by the area we’re interested in, multiplied by how much energy each of those photons is carrying:
If we divide output power by input power, we get efficiency:
This equation might look intimidating, but the important thing is its shape, and where the efficiency is maximized.
The graph below gives us both those things.
Image Source: By Sbyrnes321 – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=13252694
We can see that the maximum efficiency is around 32-33%, and happens for band gaps between 1.1 and 1.4 eV. This is specific to our sun as a black body of 5800 K. You might remember that the band gap of silicon is 1.11 eV, which falls within this ideal region, holding a maximum theoretical efficiency of about 32%. As the first industrial solar cell material, silicon is, in fact, a really good choice.
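The first, “ultimate efficiency” step of this calculation, where every absorbed photon delivers exactly the band gap energy and the sun is modelled as a 5800 K black body, is easy to reproduce numerically; the loss mechanisms discussed next are what bring it down to the ~33% limit in the graph. A minimal sketch, using the dimensionless substitution x = E/kT so that all geometry factors cancel in the ratio:

```python
import math

# "Ultimate efficiency" step of the Shockley-Queisser calculation:
# every photon with E > Eg delivers exactly Eg, with the sun as a
# 5800 K black body.  In the variable x = E/kT the photon-flux
# integrand is x^2/(e^x - 1), the power integrand is x^3/(e^x - 1),
# and all geometry factors cancel in the ratio.
def integrate(f, a, b, n=20000):              # simple trapezoid rule
    h = (b - a) / n
    return h * (sum(f(a + i * h) for i in range(1, n)) + 0.5 * (f(a) + f(b)))

def ultimate_efficiency(eg_ev, t_sun=5800.0):
    kt = 8.617e-5 * t_sun                     # kT of the sun, in eV
    xg = eg_ev / kt
    absorbed = integrate(lambda x: x**2 / math.expm1(x), xg, 50.0)
    incident = integrate(lambda x: x**3 / math.expm1(x), 1e-6, 50.0)
    return xg * absorbed / incident

u = ultimate_efficiency(1.11)                 # silicon's band gap
print(f"{u:.2f}")                             # roughly 0.44 before further losses
```

For silicon’s 1.11 eV gap this step alone gives roughly 44%; accounting for radiative recombination and the other losses below is what reduces the limit to the 32-33% shown in the graph.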
Shockley and Queisser did further calculations to improve the realism of their model. They considered radiative recombination, which is when electrons and holes recombine and produce light. They considered black body radiation from the device itself resulting in some losses. They also discussed the importance of the incident angle of sunlight, as well as the load resistance across the output terminals that yields the most useful electricity compared to heat losses. Finally, they also considered that not all of the light will actually create electron-hole pairs. This is partly because of the charge recombination we mentioned above, but also because in some materials like silicon the crystal lattice has to vibrate in just the right way (a phonon must participate) for light to be absorbed, so only a fraction of the light with sufficient energy will actually generate electron-hole pairs. Also, not all of the junction voltage will be gained by the electrons – some of it will be converted to heat as the charge carriers make their way out of the depletion region.
To really measure and quantify these effects for different solar cells, sophisticated physical models need to be paired with the right experimental measurements.
Advanced solar simulators, combined with an IV measuring instrument, allow researchers to get the data they need to understand how effects like the above are influencing their solar cell designs.
Research on modern solar cells focuses on many of the issues that Shockley and Queisser discussed in their seminal paper, as well as trying to find ways to “hack” the limit they calculated.
Some other practical considerations include reflection off the front surface and shading from the metal terminals. In addition, the calculation they did was for a single-junction photovoltaic cell of a single material (one pair of n-type and p-type semiconductors). One way to improve things is to use multiple materials with multiple junctions, which has resulted in a lot of different so-called multi-junction solar cell designs.
We’ve already talked about a few of the goals engineers and scientists have in mind when designing solar cells, and it’s worth mentioning a few more in order to understand the direction of research and device evolution.
The first main goal of solar cell design is to increase absorption, to get more energy out of each cell. We’ve already mentioned a few of the challenges and limits around this goal. We also want to maximize charge separation and transport, which means keeping the charges apart until they’ve done what they’re intended to do, and to maximize the photovoltage, which, as we’ve discussed already, means that each electron carries more energy, so fewer solar cells are needed to do useful work.
The next design goals are closely related to the above, and arise from overcoming unwanted effects that take place in the solar cell. One of these is called surface recombination, where the electric charge carriers reach the surface of the device, and instead of traveling around the circuit where they’re intended, each electron-hole pair comes back together and recombines.
Part of the reason this occurs is that just above the surface of a device, the junction potential doesn’t really exist, so charge carriers are free to take whichever path is easiest to recombine.
A final design goal is to improve solar cell production techniques so that we can mass-produce them more cheaply and offer this source of energy to a wider portion of humanity.
Also in the 1980s, solar cells were made with silicon dioxide (SiO2) on the front surface, which served as a barrier to prevent carriers from reaching the surface and recombining prematurely.
Because this technique also gave silicon a natural layer of protection from the elements, it became more commercially worthwhile to make solar cells from float-zone silicon, a process that produced better quality material in which charge carriers could travel much farther. With this technique, silicon cells achieved 20% efficiency in 1985.
Another technique to increase absorption was to minimize the area covered by the metal contacts. The metal used to complete the circuit of a solar cell has to attach to the front and back surfaces. The use of point contacts reduces the shadowing of this metal and results in increased absorption. This technique allowed Stanford to achieve 22% efficiency in 1992.
Later, it was realized that point contacts at the rear of the solar cell actually help to prevent recombination at the back of the cell, because the silicon to silicon dioxide interface was easier to produce without defects, compared to the silicon to metal interface. This technique, along with a few other refinements, pushed silicon solar cell efficiency up to 24% in 1994.
A schematic of a solar cell employing many of these techniques is shown below.
All of the design methods and progress we’ve talked about so far have centered on silicon and a single junction. As you might imagine, there’s no law saying that we have to stick with silicon, nor do we have to stick to a single junction! We’ll talk about alternative materials in the next section (because there’s a lot of ground to cover there), but the multi-junction approach is a general design concept that is pretty easy to understand.
When we were calculating the maximum efficiency of solar cells, we said that a photon with energy greater than the band gap moves a single electron into the conduction band, and any excess energy is mostly converted into heat. That means sunlight with energy well above the band gap is absorbed, but not converted very efficiently.
The multi-junction solar cell tries to rectify this inefficiency by presenting multiple band gaps to the incoming sunlight. Basically, it presents a series of materials and junctions to the light, usually starting with the highest energy (shortest wavelength) light at the top of a stack, and working down to the lowest energy (longest wavelength) at the bottom.
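To see why stacking gaps helps, here’s a rough sketch under the same idealization as before (5800 K black-body sun, each absorbed photon delivering exactly its sub-cell’s band gap energy); the 1.8 eV / 1.1 eV gap pair is chosen purely for illustration:

```python
import math

# Same idealization as the Shockley-Queisser "ultimate efficiency"
# step (5800 K black-body sun; each absorbed photon delivers exactly
# its sub-cell's band gap energy).  Gap values are illustrative only.
KT_SUN = 8.617e-5 * 5800.0                    # kT of the sun, in eV

def integrate(f, a, b, n=20000):              # simple trapezoid rule
    h = (b - a) / n
    return h * (sum(f(a + i * h) for i in range(1, n)) + 0.5 * (f(a) + f(b)))

def photon_flux(e_lo_ev, e_hi_ev):            # relative flux between two energies
    return integrate(lambda x: x**2 / math.expm1(x),
                     e_lo_ev / KT_SUN, e_hi_ev / KT_SUN)

incident = KT_SUN * integrate(lambda x: x**3 / math.expm1(x), 1e-6, 50.0)

# Single 1.1 eV junction: every useful photon yields 1.1 eV.
single = 1.1 * photon_flux(1.1, 25.0) / incident
# Two-junction stack: a 1.8 eV top cell takes the high-energy photons,
# a 1.1 eV bottom cell takes the rest.
stack = (1.8 * photon_flux(1.8, 25.0) + 1.1 * photon_flux(1.1, 1.8)) / incident
print(f"single: {single:.2f}  stack: {stack:.2f}")
```

Even in this crude sketch the two-gap stack captures a noticeably larger fraction of the incident power than the single junction, because high-energy photons waste less of their energy as heat.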
Image Source: http://www.tindosolar.com.au/learn-more/poly-vs-mono-crystalline/
Semiconductors can be made from alloys that contain equal numbers of atoms from groups III and V of the periodic table, and these are called III-V semiconductors.
Group III elements include those in the column of boron, aluminium, gallium, and indium, all of which have three electrons in their outer shell.
Group V elements include those in the column of nitrogen, phosphorus, arsenic, and antimony, all of which have five electrons in their outer shell.
In a III-V semiconductor, the atoms arrange into what’s called a zincblende crystal structure, which can be pictured as two interpenetrating face-centered cubic lattices, one of group III atoms and one of group V atoms.
By construction, all the valence electrons in a III-V semiconductor are used up in bonding, so none are free to conduct. However, by doping in a similar manner to silicon, we can introduce extra electrons or holes that make the material usefully conductive.
One of the main advantages of III-V semiconductors is that the crystal composition can be varied by replacing some group III atoms with other group III atoms. This changes the bonding and packing distances of the atoms. Why would we want to do this? The reason is that the crystal structure determines the band gap energy. III-V semiconductors, therefore, give us the ability to tune the band gap to our heart’s desire.
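As a concrete, hedged illustration of this tunability, one widely quoted empirical relation for the direct band gap of the alloy AlxGa(1-x)As is Eg(x) ≈ 1.424 + 1.247x eV, valid for aluminium fractions below about 0.45:

```python
# Hedged illustration of band-gap tuning in a III-V alloy, using the
# widely quoted empirical relation for the direct gap of AlxGa(1-x)As:
#     Eg(x) ~ 1.424 + 1.247*x  (eV), valid for x below about 0.45.
def algaas_gap_ev(x):
    if not 0.0 <= x <= 0.45:
        raise ValueError("relation only valid for x in [0, 0.45]")
    return 1.424 + 1.247 * x

# Sweeping the aluminium fraction moves the gap from GaAs's 1.424 eV
# up towards roughly 1.98 eV.
for x in (0.0, 0.2, 0.4):
    print(f"x = {x:.1f}: Eg = {algaas_gap_ev(x):.2f} eV")
```

Simply swapping some gallium atoms for aluminium thus dials the gap continuously across a range that a single pure material could never cover.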
The methods by which III-V semiconductors are made include liquid phase epitaxy (LPE), molecular beam epitaxy (MBE), metal organic chemical vapour deposition (MOCVD), and metal organic vapour phase epitaxy (MOVPE), all of which allow for fine control of the make-up and thickness of semiconductor layers. Unfortunately, these methods are also fairly expensive.
In all of these technologies, because of their mass-produced nature, the materials usually have more defects that will prevent the charge carriers from travelling as far as we’d like. To get around or compensate for this issue, materials usually need to be really good light absorbers.
Alternatively, several junctions are used, or the electric potential of the junction is extended to help carriers along, as is done in p-i-n semiconductors which have an intrinsic (undoped) semiconductor layer in the middle.
This solar cell technology produces cells with many defects, making them difficult to dope, and ultimately setting a limit on the junction potential that can be achieved. Defects also make the films more resistive, and overall make the cell’s performance dependent on the density of carriers present. To model this complex behaviour can be very challenging, and is a topic we’ll tackle in the Data Sets & Models section.
Probably the best-developed thin-film solar cell technology is amorphous silicon, meaning silicon that isn’t arranged into a perfect crystal structure. It’s been in commercial production since 1980, and has the immediate advantage of not needing specific crystal vibrations in order to absorb light (since the lattice is disordered anyway). It therefore behaves like a direct band gap material, and absorbs light more strongly than monocrystalline silicon.
Cadmium telluride (CdTe) is made from group II and group VI elements, and has a direct band gap of 1.44 eV, making it one of the best-suited materials for photovoltaic applications. It crystallizes in the zincblende structure shown below.
The mineral perovskite is named after the Russian mineralogist Lev A. Perovski. Materials sharing its crystal structure bear his name and are known as perovskites. Originally studied for ferroelectricity and superconductivity, they were discovered in 2009 to be useful in solar cells.
They follow the general formula ABX3, where A and B are both positive ions (cations) of different sizes, and X is a negative ion (anion) that is usually either oxygen or a halogen. In oxide perovskites, A is usually divalent (a 2+ ion) while B is tetravalent (4+); in the halide perovskites used for solar cells, A is monovalent and B is divalent. The A cation sits at the center of a cube, coordinated by 12 anions, while the smaller B cation occupies an octahedral site surrounded by 6 anions. The diagram shows this unique 3-dimensional structure.
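A classic rule of thumb for whether a given ABX3 composition can form a stable perovskite lattice is the Goldschmidt tolerance factor; the ionic radii below are commonly quoted literature values, used here purely for illustration:

```python
import math

# Goldschmidt tolerance factor t = (rA + rX) / (sqrt(2) * (rB + rX)),
# a classic rule of thumb for ABX3 perovskite stability (stable
# perovskites typically have roughly 0.8 < t < 1.0).
def tolerance_factor(r_a, r_b, r_x):
    return (r_a + r_x) / (math.sqrt(2) * (r_b + r_x))

# Methylammonium lead iodide (MAPbI3), with commonly quoted ionic
# radii in angstroms: rA ~ 2.17, rB (Pb2+) ~ 1.19, rX (I-) ~ 2.20.
t = tolerance_factor(2.17, 1.19, 2.20)
print(f"t = {t:.2f}")   # ~0.91, inside the perovskite-forming window
```

A value near 0.9 is one reason methylammonium lead iodide, the workhorse of perovskite photovoltaics, forms this structure so readily.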
Image Source: https://sauletech.com/press/
What makes perovskites particularly interesting is that the band gap depends on the composition and structure of the lattice (including how flat or two-dimensional it is), so researchers can tune toward the optimal band gap, much as with III-V semiconductors. In contrast to III-V semiconductors, however, perovskites are substantially cheaper. Because of these two combined factors, perovskite technology was selected as one of the biggest scientific breakthroughs of 2013 by the editors of both Science and Nature.
Perovskites already use some organic elements (which include carbon, hydrogen, nitrogen and oxygen), but there are other solar cells in development that fall purely on the organic side of the chemistry spectrum. The driving motivation behind organic solar cell development is that the materials would be cheap, as would the production, because mass-producing carbon chains is technology that is well developed in, for example, plastics.
These solar cells have benefited from advances in the development of LEDs based on similar technology, but they still have substantial development ahead in order to be competitive with silicon.
While there is a wide variety of organic solar cell materials, the majority rely on organic molecules with sp2 hybridization – that is, conjugated carbon double bonds. The electrons of these double bonds can move to fill in positive charge gaps, which makes the materials hole conductors. Usually they have a band gap around 2 eV, which is larger than ideal for solar absorption.
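Why is a 2 eV band gap "not ideal"? A photon can only be absorbed if its energy exceeds the band gap, and photon energy relates to wavelength via E = hc/λ (hc ≈ 1240 eV·nm). A quick sketch of that conversion shows what a 2 eV absorber gives up compared to silicon:

```python
# Convert a band gap in eV to the longest wavelength the material can
# absorb, using E = h*c / lambda with h*c ≈ 1240 eV·nm.
HC_EV_NM = 1239.84  # h*c in eV·nm

def absorption_edge_nm(band_gap_ev: float) -> float:
    """Longest absorbable wavelength (nm) for a given band gap (eV)."""
    return HC_EV_NM / band_gap_ev

# A ~2 eV organic semiconductor only absorbs below ~620 nm, missing the
# red and near-infrared portion of the solar spectrum; silicon (~1.1 eV)
# absorbs out past 1100 nm.
print(f"2.0 eV -> {absorption_edge_nm(2.0):.0f} nm")
print(f"1.1 eV -> {absorption_edge_nm(1.1):.0f} nm")
```

Since a large fraction of the sun's power arrives at wavelengths longer than 620 nm, a 2 eV absorber leaves much of the spectrum uncollected.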
While graphene is stronger than steel and highly conductive (due to its array of double bonds), it has a few other properties that make it particularly useful for solar cells. It is both very flexible and optically transparent (absorbing only about 2.3% of incident light from UV to IR), making it ideal for application in thin-film solar cells.
Remember that, in order to capture the current out of the absorption region of a solar cell, we have to run wires from the top to the bottom of the cell, passing through our load on the way. These wires shadow the front surface and decrease the overall light hitting our active area.
Graphene, on the other hand, can be applied as a collector to the front surface, and will transmit much more of the light without shadowing, while still capturing and conducting the charge coming out of the absorption region. Its flexibility allows it to be used in thin-film solar cells, particularly in perovskites, where the main collector used is Indium Tin Oxide (ITO), a brittle ceramic that cracks when bent. Graphene thus unlocks more of the potential benefits of perovskite flexibility.
Graphene has also been used to increase the power conversion efficiency (PCE) of the perovskite active material itself, with some doped graphene allowing larger perovskite grains to form on the carbon network. In this role, it has been used as a carrier transport material.
Finally, it has also been used to protect unstable perovskite films, thanks to graphene's superior physical, chemical, and thermal stability. While graphene by itself doesn't make a solar cell, in combination with other materials it unlocks a lot of potential advances.
As you might have already figured out, photovoltaics is a huge and interesting field of research that, as we’ve said, will play a major role in humanity’s energy future. We also mentioned above that there’s been so much development on monocrystalline silicon solar cells that there’s a steady trend of decreasing price, known as Swanson’s law.
The cost per watt is one of the bottom-line metrics in the energy industry. The economics of manufacturing silicon have come very far since the invention of the first solar cell; so far, in fact, that much of the cost now lies in installation and accompanying overhead rather than in the devices themselves. The cost per watt for silicon dipped below one US dollar years ago, and continues to fall.
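Swanson's law is often quoted as roughly a 20% drop in module price for every doubling of cumulative shipped capacity. The sketch below models that learning curve; the starting price, starting volume, and learning rate are illustrative assumptions, not measured industry figures:

```python
# Sketch of a Swanson's-law learning curve: module price falls by a
# fixed fraction (often quoted as ~20%) with each doubling of cumulative
# shipped capacity. The base price, base volume, and learning rate used
# here are illustrative assumptions only.
import math

def price_per_watt(cumulative_gw: float, base_gw: float = 1.0,
                   base_price: float = 4.0,
                   learning_rate: float = 0.20) -> float:
    """Projected $/W after cumulative_gw of shipments, given a base
    price at base_gw and a fractional price drop per doubling."""
    doublings = math.log2(cumulative_gw / base_gw)
    return base_price * (1.0 - learning_rate) ** doublings

for gw in (1, 2, 4, 8, 16):
    print(f"{gw:>2} GW cumulative: ${price_per_watt(gw):.2f}/W")
```

The key feature of a learning curve like this is that price depends on cumulative production, not time: the faster the industry ships, the faster the price falls, which is part of why silicon's head start is so hard for new technologies to overcome.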
Silicon’s dominance in the market can make it very challenging for new photovoltaic technologies to gain traction. Therefore, it’s crucial that researchers get the most accurate data possible, as quickly as possible.
At G2V Optics, we have the technology and expertise to meet the need for fast, accurate solar cell testing data. With our class-leading, high precision solar simulators, researchers can test their solar cells accurately and under controlled and reproducible conditions.
Additionally, our IV cards allow for a simultaneous capture and parametric scan of a solar cell, so researchers can quickly characterize their devices and generate standard curves without the need for multiple tools or pieces of software.
Finally, our devices offer an optional ability to measure spectrally resolved responsivity (SRR) – our low-resolution EQE. The SRR unit is capable of measuring the spectral responsivity or quantum efficiency at the LED wavelengths used in the instrument.
This capability can be used to study the behaviour of different absorber materials within your devices, or to measure the performance of different junctions in a multi-junction solar cell.
This additional information can be crucial for evaluating whether a device meets specifications, and where it might need additional development.
If you’re a researcher in this field, contact us to find out more about how our solar simulation products can help you capture the best data about your photovoltaic devices.