The Race to Zero

This report was prepared from the FujiFilm 2022 Summit. It reviews the astounding data growth in the archive storage space, some frank advice from Silicon Valley's storage gurus on avoiding imminent vertical-market failure from the outsized growth in archival data, and the innovations driving the race to zero $/GB, zero-waste, zero-carbon-footprint storage.

The Zettabyte Era

By 2025, roughly 175 ZB of data are projected to be created, of which 11.7 ZB will be stored: the equivalent of 66 years of the Large Hadron Collider's experimental data, 125 million years of 1-hour TV shows, 55.3 million LTO-9 cartridges, or 50 million 20TB hard disk drives. If we project the annual 30% CAGR forward, we'll enter the Yottabyte Era by 2043.
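As a quick sanity check on that Yottabyte projection, here is the arithmetic in Python, using only the figures quoted above (11.7 ZB stored in 2025, 30% CAGR) and the fact that 1 YB = 1,000 ZB. It lands within a year of the 2043 estimate:

```python
import math

# Figures from the paragraph above
stored_zb_2025 = 11.7   # ZB stored in 2025
cagr = 0.30             # 30% annual growth
yb_in_zb = 1000         # 1 YB = 1,000 ZB

# Years until stored data first exceeds 1 YB
years = math.ceil(math.log(yb_in_zb / stored_zb_2025) / math.log(1 + cagr))
print(2025 + years)  # 2042, within a year of the report's 2043 figure
```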

Archive data accounts for roughly 80% of the total data stored, about 9.0 ZB, using figures from IDC. Traditionally, archival data has been an umbrella term for boring stuff: medical documents, corporate compliance documents, emails, old movies.

Look to the future: archive will refer to a spectrum of data, from active archive to dark archives, used to store everything from media in the film industry to AI/ML/IOT training data, accessed for a few weeks before moving back down to the lower archive tiers. The divisions between the tiers on the archive spectrum will vary based on the access frequency, with lower tiers growing larger as data ages.

Moving data between layers depends entirely on whether the tiered storage architecture is tightly or loosely coupled. The top of the hierarchy holds the golden copy of the data, whereas the lower tiers hold the master copies. About 10% of the world's data sits in this golden-copy tier, the highest-performance tier, while 80% is low-activity archival data in the lowest tiers. The greatest challenge is centered in the primary and secondary stages, where data moves from hot to cold. This region is dynamic: suitable neither for performance-critical data nor for long-term retention. By 2025, this model will take shape as the active archive tier: data that sees high access for three to four weeks and is then moved back down to the deep archive.
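A minimal sketch of the tiering model described above, written as a classifier on days since last access. The specific thresholds are my own illustrative assumptions, not figures from the summit; only the hot/active-archive/deep-archive shape comes from the text:

```python
# Illustrative sketch (thresholds assumed): map data age to an archive
# tier, mirroring the hot -> active archive -> deep archive spectrum.
def archive_tier(days_since_access: int) -> str:
    if days_since_access <= 7:
        return "primary (hot, golden copy)"
    if days_since_access <= 30:       # ~three to four weeks of high access
        return "active archive"
    if days_since_access <= 365:
        return "archive"
    return "deep/dark archive"

print(archive_tier(2))      # primary (hot, golden copy)
print(archive_tier(20))     # active archive
print(archive_tier(4000))   # deep/dark archive
```

As data ages it only moves downward through these buckets, which is why the lower tiers grow larger over time.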

Storage Media: SSD, HDD, and Tape: What Stores Old Facebook Posts?

Two media serve the archive tiers: tape and HDD. Let's discuss their differences. Note that SSDs are rarely used for archival storage. While SSDs are the highest-performance storage media, with data access times between 25 and 100 microseconds, each read/write operation (that is, encoding a one or a zero to a transistor) damages the transistor. As a result, SSDs are limited in the number of R/W operations they can endure, which makes them unsuitable for a long-term archival storage solution. They are also almost 10x more expensive than HDDs on a $/GB basis. Tape and HDD serve the archive layer, and the majority of hyperscale archival data (80%) is stored on HDDs.

Courtesy of Fred Moore: FujiFilm 2022 Summit


HDDs are a storage medium in which ones and zeros are written to 3.5″ or 2.5″ magnetic disks spinning at 7,200 to 15,000 RPM. As the disk spins, a read/write head flips the magnetic polarity of grains on the disk; the direction of the magnetic field vector determines whether a one or a zero was written. At a datacenter, a 3.5″ HDD is mounted to a system for online access, such as a NAS (network-attached storage) appliance, a JBOD (just a bunch of disks), or a server. There are only three original HDD manufacturers in the world: Toshiba, Western Digital, and Seagate, locked in a never-ending battle for the lowest $/GB, highest areal density, and best access time, all under pressure from supplier partnerships and a complex supply chain shaped by global politics (though, yes, that describes every global supply chain).


Tape, on the other hand, is a much older storage medium, first developed in the 1960s and standardized in the 1990s under the LTO form factor. The tape market is what business folk call a "consolidated market": there are only two form factors of tape media, both under the same jurisdiction, IBM's. IBM's parenting style in the storage industry resembles a tiger mom's. IBM manages its own tape form factor, is the sole manufacturer of tape drives for both the LTO and IBM form factors, and insists on releasing its IBM cartridges at least two years ahead of LTO's.

Tape drives are the systems used to read from and write to tape cartridges; stacked together, they form a tape library that can store up to half an exabyte. The benefit of using tape, especially for archive, is its ability to be disconnected from the network. Simply take out the cartridge, throw it in a box, and there is a physical "air gap" between that data and the network. Historically, tape's use cases have been backup, hierarchical storage managers (HSMs), and media asset management. At some hyperscale operations (Amazon S3, Microsoft Azure, Alibaba), tape is used as the primary archive storage medium; you can tell because access times can range from 1 to 12 hours, the physical cartridge being stored in a box at a separate off-site location.

Total Cost of Operations (TCO)

A key metric for deciding between tape and HDDs for a cloud operation depends not only on access time, average storage capacity per unit, and offline storage capability, but on the Total Cost of Operations (TCO). TCO refers to all the costs associated with media utilization, from raw production to end of life.

Consider a full-height LTO tape drive, the system used to read from and record to a tape cartridge. Its average energy usage is 0.031 kWh, with an average life cycle of 6.85 years. Accounting for the costs of production, distribution, and operational energy, a single LTO tape drive produces 1.4 metric tons of CO2 per year. Storing 28 PB of archival data for 10 years in a tape-based solution produces 78.1 metric tons of CO2, using 14 LTO-9 tape drives and 1,500 LTO-9 cartridges in a single frame. The equivalent amount stored on HDDs in a JBOD would produce 1,954.3 metric tons of CO2 over 10 years, using 18TB HDDs during the first five-year cycle and 36TB HDDs in the second. Those figures indicate roughly 10 times more yearly energy consumed by an HDD-based system than by a tape-based system (Brume, IBM).
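The 10-year CO2 gap above is worth putting side by side. This sketch uses only the figures quoted in the paragraph (Brume, IBM); the ratio is my own arithmetic on those numbers:

```python
# 10-year CO2 for the 28 PB archive scenario, figures quoted above
tape_co2_10yr = 78.1     # metric tons: 14 LTO-9 drives + 1,500 cartridges
hdd_co2_10yr = 1954.3    # metric tons: JBOD, 18TB then 36TB HDD cycles

ratio = hdd_co2_10yr / tape_co2_10yr
print(round(ratio, 1))   # ~25x more CO2 on HDD over the 10-year window
```

Note the CO2 ratio (~25x) is steeper than the ~10x yearly energy ratio, since the CO2 figures also fold in production and distribution, not just operational power.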

Right now, you can purchase an 18TB native-capacity LTO-9 tape cartridge for $148.95, with 1,035 m (more than a kilometer) of magnetic tape inside. HDDs, on the other hand, carry a higher cost per unit ($529.99 for a WD 20TB SATA Gold), but their areal density (the number of bits per square inch of media) is three orders of magnitude higher than tape's. Next-gen HDDs suited for archive will approach a whopping 36TB, and rumors have spread of a lower-performance 50TB HDD from Western Digital entering the market. These new HDDs will likely serve part of the future archive storage market, specifically the first archive tier, active, but cannot serve all archive data, especially as it grows older and colder.
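Normalizing those street prices to $/TB makes the unit-cost gap concrete. Prices are the ones quoted above; the per-TB division is my own:

```python
# Street prices quoted above, native capacities
lto9_price, lto9_tb = 148.95, 18   # LTO-9 cartridge
hdd_price, hdd_tb = 529.99, 20     # WD 20TB SATA Gold

print(round(lto9_price / lto9_tb, 2))  # ~8.27 $/TB for tape media
print(round(hdd_price / hdd_tb, 2))    # ~26.5 $/TB for the HDD
```

On media cost alone the HDD runs roughly 3x the tape cartridge per terabyte, before any of the TCO factors discussed above.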

Source: Information Storage Industry Consortium Area Density Chart from Carl Che – WD

This begs the question: between HDD and tape, where is each best used? Where does energy efficiency become the highest concern? And what about data accessed once every 30 or 50 years in the dark archive?

The Deep Dark Archives

Recall that 80% of hyperscale’s archival data is stored on HDDs. Is this truly the best solution? I’m just an intern, but I say no. Here’s why:

If we take the tiered archive model as valid, where the probability of access, the access frequency, and the average value of the data determine the tiers, then the deep dark archives should be stored on neither tape nor HDDs. Data in the dark archives has almost no value. Our conditions, therefore, are: (1) near-zero $/GB, (2) near-zero carbon footprint, and (3) near-zero product waste. Storage accounts for 19% of total power at a hyperscale datacenter, and moving cold data from HDDs to tape can dramatically reduce the ten-year CO2e while simultaneously reducing e-waste. There are also substantial TCO savings in migrating cold data to tape, and companies of all sizes are looking to improve the sustainability of customer-facing storage.

SPACECAL: CAL Optimized for In-Space Manufacturing

This article is based on a presentation I prepared for the 2021 VAM (Volumetric Additive Manufacturing) Workshop, hosted by Dr. Maxim Shusteff and colleagues, which brought together the global VAM community to share exciting research in the field of volumetric 3D printing.


Additional Comments

  • Layer-based lithography systems may be incompatible in microgravity
  • We’ve demonstrated overprinting in lab, which could increase spare part diversity
  • The absence of relative motion between the photo-crosslinked object and the surrounding material enables bio-printing in extremely low-storage modulus hydrogels
  • Print times are much faster than layer-by-layer extrusion techniques
  • The print process occurs in a sealed container, which can be transported to the ISS, used, and returned to Earth with minimal human interaction
  • Minimal downtime for cleaning and maintenance as there is no extrusion system
  • Energy availability is a formidable challenge for in-space manufacturing, so mechanical motion must be kept to a minimum
  • NScrypt/Techshot plans to reduce the organ donor shortage (there are about 113,000 people on transplant waiting lists) by creating patient-specific replacement tissues or patches.

SPACECAL Research Project Report

Design for Manufacturing, Taylor Labs, University of California, Berkeley

Authored by Tristan W Schwab, Undergraduate of Mechanical Engineering April 30, 2021

The scope of this report is to discuss the progress of the SpaceCAL Project toward the zero-gravity flight test in the Spring of 2022, along with my contributions to the project and to the Design for Manufacturing Group during the Spring 2021 semester.

Figure 1

NASA Tech Flight Proposal

The purpose of SpaceCAL is to develop a compact enclosure containing five parallel computed axial lithography (CAL) printers to be tested suborbitally in the Zero-Gravity Flight Demonstration. The system is projected to fly in the Spring of 2022 and complete several prints with resins of varying viscosity.

SpaceCAL is inherently a microgravity technology demonstration of the current CAL technology developed at the University of California, Berkeley; however, alternative scopes of research may apply. Suggestions have been raised to extend additive manufacturing research for in-space manufacturing, including: 1) part strength versus fluid velocity via shadowgraph analysis in resin vials; 2) in situ automated post-processing of CAL prints; 3) testing of a temperature-control system for low-gravity resin printing in a space environment; 4) tracking low-viscosity polymerization in low gravity; and 5) performance of CAL for microgravity bioprinting.

The SpaceCAL project can ultimately demonstrate not only the unique abilities of CAL AM technology but also its potential to provide invaluable research in the growing field of in-space manufacturing.

As mentioned, the SpaceCAL project is scheduled to fly in the Spring of 2022 (next year). Since the project's launch in January, the primary focus has been the development of an enclosure to contain the CAL system, electronics, optics, and hardware, and the design of a compact, exchangeable vial stack (fig. 1, right). The planned design has evolved since the initial Flight Tech Proposal. First, the original documentation discussed three sets of vial stacks containing 22 hydrogels: 11 high-, medium-, and low-viscosity resins. The current design contains 5 vial stacks of 5 vials each; changing the number of vial stacks and vials affects the number of resins that will be used in flight. Second, the project will no longer use "Schlieren imaging to record video data of the refractive index history," but rather shadowgraph methods, due to the high sensitivity of Schlieren imaging.

Current State of SpaceCAL and Contributions

SpaceCAL exists in an early design stage in Solidworks. Full purchase orders for the primary assembly include hardware such as 8020 beams, linear guide rails, projectors, optics, and electronics.

The author was established as the mechanical sub-team lead, tasked with rebuilding the first iteration of the SpaceCAL system in Autodesk Fusion 360. The author rebuilt an 8020 enclosure (Frame Assembly) and performed finite element analysis in Fusion 360 to analyze the g-loads on the frame specified by Zero-Gravity.

The author used these analyses to confirm the material selection of the frame assembly and to add triangulated supports to the frame.

Progress then shifted to rebuilding the SpaceCAL system in Solidworks (due to the limited parametric-modeling capabilities of Fusion 360). The author collaborated with graduate student lead Joe Toombs to develop a parametric sketch in Solidworks and the second version of the frame assembly. Current focus has turned to developing optic setups for particle tracing, continuing summer research positions in the Design for Manufacturing group, assembling the SpaceCAL system, and collaborating with fellow future Cal grad student Taylor Waddel on mechatronics and software. Future project objectives are expected to shift toward resin formulation and characterization.

The SpaceCAL project certainly arrives at a remarkable time for the College of Engineering at Berkeley to be involved in the growing excitement for exploration and industry in space.

Visualizing Stress Distribution in 3D Printed Lattices

The first portion of this article showcases my final project in PDF format. My first prototype is shown below the PDF.

Click here to view the final prototype video.


First and Second Prototype

Project Description:

For the initial prototype of this project, I demonstrate the unique compliance of lattice structures designed and optimized in NTopology and manufactured on an Ender3 Pro and a Formlabs 3 using elastic materials. I showcase my design process, my thought process in building lattices in NTopology, and my process for building an interface to visualize force distribution through a lattice.

Click here to see the prototype demonstration. 

Designing a Lattice:

There are multiple platforms for designing lattices. I selected NTopology, a software package used in industry and readily available under a student license. NTopology is unique for its interface with AM and its easy "block" UI, which proved very efficient for learning about different lattice structures and adjusting parameters.

Figure 1: Fluorite and Body Centered Cubic Lattice Structures generated in NTopology after importing a CAD Body

Printing Process:

The first CAD bodies I designed had small voids which I envisioned could house the force-sensitive resistors. The idea would likely have worked, but printing on my Ender3 in TPU, an elastic filament, showed that printing any of the lattices with the support structures for the voids was not easily scalable and unnecessarily overcomplicated the design. Ultimately, those prints did not turn out well, and I decided that generating a simple rectangular-prism lattice without voids would be the best solution.

Unfortunately, the files I sent to the Jacobs Center to print on their FormLabs SLA printer were unusable. I wish I had given more thought to the initial design so that I could use an SLA print for a comparable demonstration, but I submitted my initial design prematurely. This print was made in elastic 50A resin, which may have been a bit too elastic for the purposes of this project.

Figure 2: Example of TPU failed lattice print with voids
Figure 3: Failed SLA print.

It turns out that the best print is the simplest design. This could not be more true when it comes to printing elastic lattices, which fundamentally behave like springs. I tried a variety of lattice designs (Weaire-Phelan, Kelvin cell, Isotruss, Fluorite, etc.); all of these lattices are nearly impossible to manufacture without support structures. I prioritized visualizing the cells themselves, so printing at a low density was a heavy consideration. The best lattice for my purposes was the body-centered cubic design, which presents no overhangs greater than 45 degrees that would necessitate printing supports.

Circuit Design

To begin my circuit design, I set up one FSR embedded in a lattice I had on hand. Once I generated the right print, I tried embedding two FSRs accompanied by LEDs. I had some trouble getting the LEDs and FSRs to stay connected to the jumper wires, so I soldered end tips to all of them, plugged them into the female-male jumper wires, and wrapped them in electrical tape for the final presentation.
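For readers curious how an FSR reading becomes a number on the Arduino, here is a hypothetical sketch of the usual wiring: the FSR forms a voltage divider with a fixed pull-down resistor, and the board's 10-bit ADC digitizes the divider output. The resistor values here are illustrative assumptions, not measurements from my build:

```python
# Hypothetical FSR voltage-divider model (values assumed, not measured)
VCC = 5.0         # supply voltage, volts
R_FIXED = 10_000  # fixed pull-down resistor, ohms

def adc_reading(r_fsr_ohms: float) -> int:
    """10-bit ADC count for a given FSR resistance."""
    v_out = VCC * R_FIXED / (R_FIXED + r_fsr_ohms)  # divider output
    return int(v_out / VCC * 1023)                  # scale to 0..1023

# Pressing harder lowers the FSR's resistance, raising the ADC count:
print(adc_reading(100_000))  # light touch -> low count
print(adc_reading(1_000))    # firm press  -> high count
```

Mapping each count to an LED brightness is then just a rescale of this 0-1023 range.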

Figure 3: Front and back faces of body-centered cubic lattice embedded with LEDs and FSRs.

Final Setup

When I initially applied a force to the front face of the lattice, it actually expanded outward as a result of the Poisson effect. This effect was so great that the corner FSRs were not picking up a force, since the upper faces of the slots I cut out of the lattice were lifting upward. To counteract this, I built a foam-board frame to contain the lattice. This made the entire setup more portable and pleasing to look at, though it took away the side view of the lattice and made the bottom-face force response harder to see.

I'll be honest: this setup looks like a jumble of wires, and making the wiring clean and orderly was an extremely difficult aspect of this project. After a few tries, I decided to separate the wiring for the left and right sides of the lattice, which effectively cut my odds of miswiring in half. I also color-coded the jumper wires to improve visibility for myself and viewers.

Figure 6: Side view of foam board slots for FSRs and LEDs.
Figure 7: Split wiring from left and right sides.
Figure 8: Arduino board using all analog ports. Voltage source from battery pack.

Figure 4: Of course, the code.

Introduction to Photocurable Resin

Nancy Zhang first joined Carbon3D as a staff research scientist. While her first years were primarily focused on tailoring resins for a project with Adidas, she has maintained a versatile role throughout her career. After the successful launch of the Adidas midsoles, Nancy moved to a managerial position in R&D, where she focuses on elastomer development and formulation. Her sharp enthusiasm and remarkable knowledge base made it clear I was talking to the right person to learn about resins. She is the type of person who knows what makes a resin: how to achieve the ultimate elasticity, strength, biocompatibility, and "printability," and how to ensure the resin is photocurable or, potentially, recyclable. As one can imagine, that is a lot of demands for one material. As lead of Material Characterization, it's no wonder she describes her work as "brain gymnastics."

Prioritizing the mechanical properties and printability of a material resin is the tip of the iceberg in material characterization. Additive manufacturing provides a platform for versatile product manufacturing which implies material versatility along with it.

Comments from the Author: I first envisioned this article as an interview I held several months ago with Nancy Zhang, an R&D Manager of Material Characterization. Nancy helped guide a significant portion of my ongoing additive manufacturing research, and while I don't plan to delve deep into the chemistry, I will include enough to discuss how photocurable resins are made for layer-by-layer additive techniques. Something I've taken particular notice of is that there seem to be few papers making general formulation assessments tied to the mechanical behavior of photopolymer resins in lithography, which may point to the need for a thorough review paper.

“Let’s say I’m selecting a material for a car product. I only need to look at the car and the function of the component. I don’t have to think about the car, where it drives, everything outside the car, and being thrown around in the trunk.”

Layer-by-Layer Lithography Printers

The premise of the SLA/DLP process is to solidify photocurable resin using a light source and sequentially lift the solidified part out of a resin reservoir (vat). SLA (stereolithography) processes can be top-down systems, where a scanning laser above the resin reservoir traces each layer of solidified material. DLP (digital light processing) processes, by contrast, are typically bottom-up: the light source (a projector) emits light into a shallow reservoir from below. When a layer is cured, the component is lifted out of the vat, and the next layer is cured.

SLA/DLP processes resemble FDM (fused deposition modeling) methods only in that both solidify material layer by layer; their similarities stop there. Once a part is completed in an SLA/DLP process, there is typically a second post-processing step, used to clean any uncured material from the surface of the part. The excess material can be washed away with acetone or another solvent, and the part placed in a UV or thermal oven to cure any remaining material without introducing new resin.

Volumetric lithography is another type of additive manufacturing; it differs from SLA/DLP and FDM methods in that it does not solidify material layer by layer but rather solidifies a part volumetrically. Volumetric lithography printers have demonstrated 30-second print times, micro-scale resolution, and heightened material compatibility for printing with softer materials such as polymeric resins, acrylates, and urethanes to generate biocompatible parts. (1)

Many forms of lithographic printers (see image) have a restricted viscosity range for rapid printing. The liquid resin must be able to move quickly throughout the vat: after each layer is cured and raised, the surrounding resin should fill the void left by the previous layer. This becomes a problem when printing with higher-viscosity resins, which dictate slower print speeds to allow the resin to flow. Some estimates in the literature place the resin viscosity for rapid printing below 5 Pa·s. (1,2) This dependence on resin flow in layer-by-layer printing suggests the development of printing techniques that are independent of resin flow. (3)*

Within research into additive manufacturing processes, most studies have investigated the mechanical properties of AM parts by varying process parameters such as post-curing time, layer thickness, and orientation. Most of these studies characterize additively manufactured parts experimentally but do not provide an accurate prediction of the mechanical properties of the resin before cure. This information is important, since a number of parameters must be taken into consideration during resin formulation that affect the overall material character. The amount of photoinitiator, oxygen presence, apparent viscosity, the curing dose: these are just a few variables that can be adjusted in formulation and significantly impact the final cured part. (4)

*This is the idea behind computed axial lithography (CAL), which I intend to write about in subsequent articles.

“Imagine Printing a Spring”

The viscosity of a photocurable resin has been shown to correlate positively with the elasticity of the post-cured material. High-viscosity resins yield parts with higher "green" strength, the strength measured after a stage-1 cure. When a resin is developed, formulators may plan for lower viscosity or increased green strength, though the latter involves an extended print time, which can inhibit the speed of production.

Printing elastomers is more difficult than printing rigid polymers because of the positive correlation between viscosity and green strength. Elastic resins are characteristically less viscous than high strength polymers, which leads to “sticking” at the print interface from surface tension.

“With [additive manufacturing], the idea is that a material can be printed into any object, quickly and easily, and that object can be used in any environment.”

Photoinitiators for Photopolymerization

Photopolymerization is a general term for any light-induced polymerization reaction in which an initiating molecule (photoinitiator) induces a chain reaction that combines a large number of monomers or oligomers into a polymer chain. These reactions are referred to as free-radical reactions and are carried out in three dimensions, such that multiple polymer chains may link to one another (cross-linking) and produce a polymer network. Most resins by themselves are not reactive to light; the photoinitiator plays the crucial role of absorbing light energy and reacting with an available species to begin the chain reaction between the resin's monomer/oligomer units. (5)

Oxygen Presence

Oxygen is known to be a reactive element, which leads to detrimental effects in free-radical polymerization. During a photoinitiation reaction, oxygen decreases the yield of the initiating species as it bonds with radicals to produce highly stable compounds that inhibit the growth of the polymer chain. Recent studies have indicated that when printing in high-viscosity resins, reoxygenation at the layer is much slower, which makes the whole polymerization process easier. Low-viscosity resins, on the other hand, have a rapid reoxygenation time at the layer interface, leading to incomplete interfacial layer bonding. (6)

Some reviews have also concluded that layer-to-layer (interfacial) strength is actually promoted by oxygen, as it leads to a slower consumption of double bonds at the surface layer**. Because oxygen decelerates the photoinitiation reaction, an increase in oxygen leaves more unconverted double bonds, which can react with the subsequent layer, thereby improving layer-to-layer strength. (7)

At Carbon, the team might use differential scanning calorimetry (DSC), which measures the change of temperature with respect to time, to estimate how effective the cure was, since converting double covalent bonds to single bonds generates heat.

Formulation and Characterization

Formulating a new resin is guided by the properties you need. When developing a new material, the Carbon team will set a goal, say a high elastic modulus, and begin their search by examining the polymer families that can meet that benchmark. As they add required properties for their new resin, they narrow down the polymer selection. As with most polymers, the hardest behaviors to make formulation judgements about are real-world properties such as UV resistance, polymer aging, and chemical compatibility, because they cut against the objective of building versatile resins.

This, of course, is not everything that goes into the complex science of resin development; there is still a lot to uncover about post-curing processes and curing dose. I'll save those for another time. For now, there is at least some appreciation for the hard work going into integrating additive manufacturing into industry and building access to fully photocurable, biocompatible, high-strength, and, maybe one day, fully recyclable resins.

**Zeang Zhao et al. also concluded that "interfacial strength decreases with curing time and incident light intensity, while the presence of oxygen can significantly improve the strength at the interface." They also found that interfaces with improved strength can be obtained by either decreasing the amount of photoinitiator or by using short-chain crosslinkers that increase the concentration of double bonds.


(1) Yang, Y., Li, L., & Zhao, J. (2019). Mechanical property modeling of photosensitive liquid resin in stereolithography additive manufacturing: Bridging degree of cure with tensile strength and hardness. Materials & Design, 162, 418-428. doi:10.1016/j.matdes.2018.12.009

(2) Quan, H., Zhang, T., Xu, H., Luo, S., Nie, J., & Zhu, X. (2020). Photo-curing 3d printing technique and its challenges. Bioactive Materials, 5(1), 110-115. doi:10.1016/j.bioactmat.2019.12.003

(3) Kelly, B. E., Bhattacharya, I., Heidari, H., Shusteff, M., Spadaccini, C. M., & Taylor, H. K. (2019). Volumetric additive manufacturing via tomographic reconstruction. Science, 363(6431), 1075-1079. doi:10.1126/science.aau7114

(4) Taormina, G., Sciancalepore, C., Messori, M., & Bondioli, F. (2018). 3D printing processes for photocurable polymeric materials: Technologies, materials, and future trends. Journal of Applied Biomaterials & Functional Materials, 16(3), 151-160. doi:10.1177/2280800018764770

(5) Fouassier, J. P., & Lalevée, J. (2012). Photoinitiators for polymer synthesis: Scope, reactivity, and efficiency. Weinheim: Wiley-VCH.

(6) Lalevée, et al., Radical photopolymerization reactions under air upon lamp and diode laser exposure: The input of the organo-silane radical chemistry, Prog. Org. Coat. (2010), doi:10.1016/j.porgcoat.2010.10.008

(7) Zhao, Z., Mu, X., Wu, J., Qi, H., & Fang, D. (2016). Effects of oxygen on interfacial strength of incremental forming of materials by photopolymerization. Retrieved April 22, 2021.

How the Crystal Structure of Carbon-Steel Changes during Tempering

This article is a revision containing my contributions to a laboratory assignment in Mechanical Behaviors of Materials. If you are interested in learning more about this study, please feel free to email me via the Contact Form. Paragraphs marked with (*) are important to the article but may not have been explicitly authored by me.

Lab Theory and Important Elements

Heat treatment (tempering) is a metallurgical procedure to enhance the strength and toughness of steel. Tempering requires high temperatures, up to 1000℃, that can be reached with a heat-treat oven or kiln. Untempered steel has a body-centered tetragonal (BCT) martensitic structure. The martensite structure exhibits high strength but is prone to dislocations and line defects due to the tetragonality imposed by interstitial carbon atoms. BCT martensite is formed from austenite (ɣ), an FCC structure. During tempering, the tetragonal martensite transforms to cubic ferrite as carbon precipitates out of the martensite. The carbon precipitates out as carbides, which limit the motion and density of dislocations in the material (Gensamer et al., 2012).

(*) Quenching is a rapid cooling process in water or oil used to obtain a certain material property by preventing undesired low-temperature processes, such as phase transformations, from occurring. It does this by reducing the window of time during which these undesired reactions are both thermodynamically favorable and kinetically accessible. In our case, this induces a martensite transformation: the steel must be rapidly cooled through its eutectoid point so that the austenite becomes metastable. (*)

To achieve both high strength and toughness, changes in temperature from quenching (rapid cooling) and heating are balanced to combine material properties from different phases. This process is largely temperature-dependent, but it should be noted that the same phase transitions can be achieved at lower temperatures with a trade-off in time. When a specimen is cooled past the martensite start boundary (MS), the phases from tempering are locked into the composition.

Other important constituents in this lab are bainite, a combination of ferrite and cementite, and pearlite, a product of the transformation from austenite to ferrite and cementite (Callister, 2020). Both are less brittle than martensite, but weaker in strength (Mazilkin et al., 2008).

Both steel specimens, A36 and 1045, can be classified as hypoeutectoid, since their carbon content is below 0.76 wt%. Therefore, the region of interest in the phase diagram is to the far left of fig. 5. A36's carbon content is 0.25-0.29 wt%, while 1045's is 0.45-0.50 wt%. For the A36 steel we used a TTT diagram for eutectoid steel, as one was not readily available for A36 (Fig. 2). Per that diagram, the A36 steel begins as ferrite. When heated, it becomes austenite (Fig. 2, A). When cooled to room temperature, A36 transitions to martensite. After bead blasting, the specimen is placed in the oven at 400°C, where it becomes 50% austenite and 50% bainite. After one hour in the oven, the remaining austenite transforms to bainite, and 100% bainite is the final composition of the material.

 Fig 5 Phase Transformation Diagram

For the 1045 steel, the composition also begins as ferrite. Upon heating to 800℃, the ferrite undergoes a phase transformation into austenite, which is then quenched to room temperature (Fig. 3, B). While the specimen was bead blasted, we assumed roughly 1.5 minutes at room temperature before placement in the lower oven.

The specimen is then heated to 400℃ in the lower oven, where the specimen consists of 50% ferrite and 50% martensite (Fig. 3, D). The specimen is held in the oven for one hour, and finally cooled to room temperature where the final composition is 50% pearlite and 50% martensite (Fig.3, F, G).

From this analysis of the phase transformations for A36 (final composition 100% bainite) and 1045 steel (final composition 50% pearlite and 50% martensite), we would expect the following characteristics for the materials tested in Chapter 2.

  1. A36 should prove to have higher toughness but lower strength, as the composition has shifted from 100% ferrite to 100% bainite, which is interpreted from our theory section in (2). 
  2. 1045 will be stronger and tougher than the A36 specimen, due to the presence of martensite, and tougher due to the presence of pearlite.


Callister, W. D., & Rethwisch, D. G. (2020). Materials science and engineering. Hoboken, NJ: Wiley.

Gensamer, M., Pearsall, E.B., Pellini, W.S. et al. The Tensile Properties of Pearlite, Bainite, and Spheroidite. Metallogr. Microstruct. Anal. 1, 171–189 (2012).

Ismail, N. M., Khatif, N. A. A., Kecik, M. A. K. A., & Shaharudin, M. A. H. (2016). The effect of heat treatment on the hardness and impact properties of medium carbon steel. IOP Conference Series: Materials Science and Engineering, 114, 1–10.

Komvopoulos, K. (2017). Mechanical testing of engineering materials. Cognella. 

Komvopoulos, K. (2021). What happens during steel tempering. Mechanical Behaviors of Materials, 1-11.

Mazilkin, A. A., Straumal, B. B., Protasova, S. G., Dobatkin, S. V., & Baretzky, B. (2008). Structure, phase composition, and microhardness of carbon steels after high-pressure torsion. Journal of Materials Science, 43(11), 3800–3805.