The Race to Zero

This report was prepared from the FujiFilm 2022 Summit. It reviews the astounding data growth in the archive storage space, frank advice from Silicon Valley's storage gurus on avoiding imminent vertical market failure amid the outstanding growth in archival data, and the innovations driving the future race to zero $/GB, zero-waste, zero-carbon-footprint storage.

The Zettabyte Era

By 2025, roughly 175 ZB of data is projected to be created and 11.7 ZB stored, equivalent to 66 years of the Large Hadron Collider's experimental data, 125 million years of 1-hour TV shows, 55.3 million LTO-9 cartridges, or 50 million 20TB hard disk drives. Projecting the annual 30% CAGR forward, we'll enter the Yottabyte Era by 2043.
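
As a sanity check on that date, here is a minimal sketch assuming the 30% growth applies to the 11.7 ZB stored figure and that 1 YB = 1,000 ZB:

```python
# Rough projection of when stored data crosses 1 yottabyte (1,000 ZB),
# assuming 11.7 ZB stored in 2025 and a constant 30% CAGR.
import math

stored_zb_2025 = 11.7   # ZB projected to be stored in 2025 (figure above)
cagr = 0.30             # assumed annual growth rate
yottabyte_in_zb = 1000  # 1 YB expressed in ZB

years = math.log(yottabyte_in_zb / stored_zb_2025) / math.log(1 + cagr)
print(f"~{years:.0f} years after 2025, i.e. around {2025 + round(years)}")
# -> roughly 17 years, landing around 2042, consistent with the ~2043 estimate
```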

Archive data accounts for 9.0 ZB of the total data stored, roughly 80%, using figures from IDC. Traditionally, archival data has been an umbrella term for boring stuff: medical records, corporate compliance documents, emails, oldies movies.


Looking to the future, archive will refer to a spectrum of data, from active archive to dark archive, used to store everything from film-industry media to AI/ML/IoT training data that is accessed for a few weeks before moving back down to the lower archive tiers. The divisions between the tiers on the archive spectrum will vary with access frequency, with the lower tiers growing larger as data ages.


Moving data between one layer and another depends entirely on whether the tiered storage architecture is tightly coupled or loosely coupled. At the top of the tiered model sits the golden copy of the data, while the lower tiers (tier 3 and below) hold the master copies. Roughly 10% of the world's data is stored in this golden-copy tier, the highest-performance tier, whereas 80% is low-activity archival data in the lowest tiers. The greatest challenge centers on the primary and secondary stages, where data moves from hot to cold. This region is dynamic: suitable neither for performance-critical data nor for long-term retention. By 2025, this middle region will take shape as the active archive tier: data that sees high access for three to four weeks and is then moved back down to the deep archive.
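
As a toy illustration of how an access-frequency rule might assign tiers (the week and year thresholds below are hypothetical, not from the summit):

```python
# Toy tier-assignment rule driven purely by time since last access.
# Thresholds are illustrative assumptions, not a published policy.
def archive_tier(days_since_last_access: int) -> str:
    if days_since_last_access <= 28:      # ~3-4 weeks of high access
        return "active archive"
    if days_since_last_access <= 365:     # cooling off
        return "cold archive"
    return "deep/dark archive"            # rarely, if ever, touched again

for days in (7, 120, 4000):
    print(f"{days:>5} days -> {archive_tier(days)}")
```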

Storage Media: SSD, HDD, and Tape: What Stores Old Facebook Posts?

Two media serve the archive tiers: tape and HDD. Let's discuss their differences. Note that SSDs are rarely used for archival storage. While SSDs are the highest-performance storage media, with data access times between 25 and 100 microseconds, each write/erase cycle (encoding a one or a zero in a flash cell) slightly damages the cell. As a result, SSDs are limited in the number of write cycles they can endure, which makes them unsuitable for a long-term, archival storage solution. They are also almost 10x more expensive than HDDs on a $/GB basis. Tape and HDD serve the archive layer, and the majority of hyperscale archival data (80%) is stored on HDDs.

Courtesy of Fred Moore: FujiFilm 2022 Summit

HDD

HDDs are a storage medium in which ones and zeros are written to 3.5″ or 2.5″ magnetic disks spinning at 7,200RPM to 15,000RPM. As the disk spins, a read/write head flips the magnetic polarity of grains on the disk, and the direction of the magnetic field vector determines whether a one or a zero was written. At a datacenter, a 3.5″ HDD is mounted to a system for online access, such as a NAS (network-attached storage), a JBOD (just a bunch of disks), or a server. There are only three remaining HDD manufacturers in the world: Toshiba, Western Digital, and Seagate, locked in a never-ending battle for the lowest $/GB, highest areal density, and best access time, all under pressure from supplier partnerships and a complex supply chain shaped by global politics (well, yeah, that describes all global supply chains, does it not?).

Tape

Tape, on the other hand, is the older storage medium, first used for data storage in the 1950s and standardized in the late 1990s under the LTO (Linear Tape-Open) format. The tape market is what business folk call a "consolidated market": there are only two form-factors of tape media, both under the same jurisdiction, IBM's. IBM's parenting style in the storage industry is similar to a tiger mom's. IBM manages its own tape form-factor, is the sole manufacturer of tape drives for both the LTO and IBM form-factors, and insists on releasing its IBM cartridges at least two years ahead of LTO's.

Tape drives are the systems used to read and write tape cartridges; stacked together, they form a tape library, which can store up to half an exabyte. The benefit of using tape, especially for archive, is its ability to be disconnected from the network. Simply take out the cartridge, throw it in a box, and there is a physical "air gap" between that data and the network. Historically, tape's use cases have been backup, hierarchical storage management (HSM), and media asset management. At some hyperscale operations (Amazon S3, Microsoft Azure, Alibaba), tape is used as the primary archive storage medium; you can tell because access times can range from 1 to 12 hours, the physical tape cartridge often sitting in a box off-site in a separate location.

Total Cost of Operations (TCO)

The choice between tape and HDD for a cloud operation depends not only on access time, average storage capacity per unit, and offline storage capability, but also on the Total Cost of Operations (TCO). TCO refers to all the costs associated with media utilization, from raw production to end of life.

Consider a full-height LTO tape drive, the system used to read from and record to a tape cartridge. Its average energy usage is 0.031 kWh, with an average life cycle of 6.85 years. Accounting for production, distribution, and operational energy, a single LTO tape drive will produce 1.4 metric tons of CO2 per year. Storing 28 PB of archival data for 10 years in a tape-based solution produces 78.1 metric tons of CO2, using 14 LTO-9 tape drives and 1,500 LTO-9 cartridges in a single frame. The equivalent amount stored on HDDs in a JBOD would produce 1,954.3 metric tons of CO2 over 10 years, using 18TB HDDs during the first five-year cycle and 36TB HDDs during the second. Those figures indicate roughly 10 times more yearly energy consumed by an HDD-based system than by a tape-based system (Brume, IBM).
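
Comparing just the quoted ten-year CO2e figures (a quick sketch using only the numbers above; the roughly 10x figure refers to yearly energy, not total CO2e):

```python
# Ten-year CO2e for 28 PB of archive, using the figures quoted above.
tape_co2_tonnes = 78.1     # 14 LTO-9 drives + 1,500 cartridges, one frame
hdd_co2_tonnes = 1954.3    # JBOD, 18TB drives then 36TB drives on refresh

print(f"HDD/tape CO2e ratio over 10 years: ~{hdd_co2_tonnes / tape_co2_tonnes:.0f}x")
# -> ~25x, since the CO2e figures also fold in production and distribution
```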

Right now, you can purchase an 18TB native-capacity LTO-9 tape cartridge for $148.95, with 1,035 m (more than a kilometer) of magnetic tape inside. HDDs, on the other hand, cost more per unit ($529.99 for a WD 20TB SATA Gold), but their areal density (the number of bits per square inch of media) is orders of magnitude higher than tape's. Next-gen HDDs suited for archive will approach a whopping 36TB, and rumors have spread of a lower-performance 50TB HDD entering the market from Western Digital. These new HDDs will likely serve some of the future archive storage market, specifically the first tier of the archive spectrum (active archive), but they cannot serve all archive data, especially as it gets older and colder.
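
On media cost alone, those street prices work out to roughly a 3x gap per terabyte (a quick sketch; drives, libraries, and the rest of the TCO are excluded):

```python
# Media-only cost per terabyte from the prices quoted above.
lto9_price, lto9_tb = 148.95, 18   # LTO-9 cartridge, native capacity
hdd_price, hdd_tb = 529.99, 20     # WD 20TB SATA Gold

print(f"Tape: ${lto9_price / lto9_tb:.2f}/TB")   # ~$8.28/TB
print(f"HDD:  ${hdd_price / hdd_tb:.2f}/TB")     # ~$26.50/TB
```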

Source: Information Storage Industry Consortium Areal Density Chart, from Carl Che (WD)

This begs the question: between HDD and tape, where is each best used? Where does energy efficiency become the highest concern? And what about data accessed once every 30 or 50 years in the dark archive?

The Deep Dark Archives

Recall that 80% of hyperscale’s archival data is stored on HDDs. Is this truly the best solution? I’m just an intern, but I say no. Here’s why:

If we accept the tiered archive model as valid, where the probability of access, the access frequency, and the average value of the data determine the tiers, then the deep dark archive should be stored on neither tape nor HDDs. The data stored in the dark archives is data with almost no value. Our conditions, therefore, are: (1) near-zero $/GB, (2) near-zero carbon footprint, and (3) near-zero product waste. Storage at a hyperscale datacenter accounts for 19% of total datacenter power, and moving cold data from HDDs to tape can dramatically reduce the ten-year CO2e while simultaneously reducing e-waste. There are also substantial TCO savings in migrating cold data to tape, and companies of all sizes are looking to improve sustainability for customer-facing storage.
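
To put the 19% figure in context, here is a back-of-the-envelope sketch; the 60% cold-data share is hypothetical, and it borrows the roughly 10x tape-versus-HDD energy gap from the TCO section above:

```python
# Back-of-the-envelope: share of total datacenter power saved by moving cold data
# from HDD to tape. Assumes storage power scales with the bytes migrated.
storage_share = 0.19       # storage's share of total datacenter power (cited above)
cold_fraction = 0.60       # assumed share of stored bytes that are cold (hypothetical)
tape_vs_hdd_energy = 0.10  # tape drawing ~1/10 the energy of HDD (TCO section)

savings = storage_share * cold_fraction * (1 - tape_vs_hdd_energy)
print(f"Potential savings: ~{savings:.1%} of total datacenter power")  # ~10.3%
```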

The Media is the Data, the Data is the Media

It's important to take a step back now and recognize the issue at hand, which I would go so far as to describe as "hoarding". We are collecting swaths of data, we will continue to collect swaths of data, and that data will continue to demand storage media to hold it. This cycle is wasteful. Is there a solution?

— to be continued…

What is Bought With Money is Bought By Labor

03/08/2022 – Lessons from Adam Smith

Starting with Questions:

In this section, I seek to answer whether and why the rules of a productive economy have shifted as the world grows its automation capabilities (something which actively removes workers from the job force) and extends humanity to the stars. Are the rules the same? In a manufacturing center, does anything change when I decide to replace my labor force with automation? What if I do that in a developing nation, somewhere a working class has been characterized as an essential ingredient of building up the economy? If there is a development timeline, and nations cannot magically jump ahead to the state of "developed", will the world as a whole always drag the weight of the impoverished? Is this now a matter of choosing sides? Do I want to help the poor or do I want to push for innovation? Am I Robin Hood or Elon Musk?

So we can make the argument that countries are poor because the wealthy take, or took, advantage of them. Those countries still have resources that could lead to a productive economy: raw goods, workers, their own currency. Why, then, is progress so slow? Is it corrupt government, or an inability to properly organize, divide labor, and produce? Is it because they do not know the recipe for the economic engine? Or do they know it, and it is simply too difficult to implement because the path of development and the flow of money meet too much resistance?

From the case of global development: whether, and why, countries lower on the development ladder will still struggle even as the most developed societies take to the stars. Why is it that we have the capacity to travel to space, yet poverty remains? Is there a cure for poverty? Why does the push for innovation outpace the push for development?

What is the proper level of taxes, and how should they be levied? Were trading monopolies and colonies a financial boon or a drain on states? Should agriculture make way for industry?

The Notes:

Adam Smith looks to build an economic model based on individual power and liberty

The wealth of a nation depends on the skill and judgement of its workers and on what proportion of the population is employed in such productive work. What we are really exchanging is the value of our own labor for someone else's.

What separates an advanced nation from a poorer one is the division of labor, and highly developed cities and towns are strong because of a strong division of tasks and roles. A person concentrates on making one thing only, and does it so efficiently that the return from selling that thing is more than enough to buy the commodities they would otherwise have had to make themselves.

In a well-governed society, the division of labor leads to universal opulence.

The bigger and more developed the economy, the more specialized the workforce. Big cities are wealthy precisely because of their increased division of physical and mental labor. By choosing work that best suits us and gives us a greater chance of gain, society wins by having access to our refined and unique skills or creations. Human beings did not differ much from each other; it was what they produced that set them apart. Yet it can also be said that by giving a monetary value to invention, uniqueness, and originality, Smith's ideas helped increase the quantity of each.

He was perhaps forced to admit that self-interest (rather than regard for others) was the essential engine of prosperity.

By seeking our greatest gain, we in fact assure the best allocation of capital for society. Today’s aircraft makers for instance, depend on the extremely specialized skills and machinery of thousands of component makers to produce a single aircraft, which leads to very long lead times between taking the order and final delivery.

A few things to observe regarding a society's combined stock or capital: (1) that which is immediately consumed, (2) fixed capital (things that produce revenue), and (3) circulating capital (money), or any other forms of capital that people can easily get their hands on.

ALL FIXED CAPITAL IS ORIGINALLY DERIVED FROM CIRCULATING CAPITAL AND FIXED CAPITAL CONTINUALLY NEEDS THE INJECTION OF MORE CIRCULATING CAPITAL TO MAINTAIN IT.

Everybody wins from a more productive use of capital. Note that most people (even in a rich country) do not dream of thinking of themselves as the laboring poor, but they still earn a salary based on going to a place of work and doing certain tasks in a certain time period. If they just stopped working, they would have nothing to live off of. Individuals through their frugality might be able to become capitalists themselves by living off of the profits of a business or rent from property or interest on money invested, instead of daily wages from the sweat of their brow.

Wherever capital predominates, industry prevails

21st Century Operations Strategy

A compilation of my notes on Operations Strategy

January 31st, 2022

The world we live in was well predicted by 20th-century forecasters: "Reduced barriers to trade are allowing smoother flow of goods, services, capital and labor across geographic boundaries, and thus more efficient allocation of resources globally" (Sara L. Beckman and Donald B. Rosenfield). Operations strategy addresses how a company should structure itself to compete in the complex web-based economy, spanning major capital decisions and technology adoption through the development of a strong supply base to deliver sophisticated products and services, all while remaining competitive with the fastest-developing economies of the world.

Operations includes both manufacturing and service operations, ranging from small- to large-volume production to continuous-flow operations such as commodity industries. Service operations in the 21st century include healthcare and office work, but also the majority of online retail companies, including Amazon and Etsy. The latest economic shock has prompted a repositioning of operations activities to produce the same required resources with less service.

Global Development Operations Strategy

In a developed economy, such as the United States, resources tend to be allocated more toward service operations. In contrast, developing economies stay anchored to agriculture and manufacturing until segments of the economy raise the standard of living, and labor costs rise with it. This transition has played out to the same drumbeat time and time again in global economics. For example, in the "four tigers" (South Korea, Taiwan, Singapore, and Hong Kong), labor initially remained focused on commodity markets, where the basis of competition is largely cost rather than quality. Following an influx of production, labor costs increased significantly, enough that these nations shifted their priority to quality goods and innovation.

When manufacturing and service operations in advanced economies are uncompetitive, exchange rate adjustments cause costs to come down and the standard of living is reduced. For less developed societies, investment in manufacturing and service operations provides a mechanism for creating jobs and raising the standard of living.

Should advanced economies outsource their operations to lower cost locations and focus their efforts on gaining a competitive advantage?

February 8th, Vertical Integration

Vertical integration decisions are among the most fundamental decisions a company can make, and they should be revisited regularly. The topic boils down to questions like:

  1. How much of the value chain should we own?
  2. What activities should we perform in house?
  3. For the activities we perform in house, do we have sufficient capacity to meet internal demand?
  4. Under what conditions should we change how much of the value chain we own?
  5. Should we direct changes towards the suppliers or towards the customers?

Strategic Factors, Market Factors, PST Factors, and Economic Factors all weigh into Vertical Integration Decisions:

Strategic Factors include whether or not an activity is critical to developing or sustaining the core capabilities of a firm.

Market Factors focus on the dynamics of the industry in which the firm resides.

PST Factors (Product, Service, and Technology) relate to the technology, the product and service architecture, and product or service development.

Economic Factors balance the costs of owning an activity with the costs of transacting for it instead.

Question: Select a company and explain what vertical integration decisions it has made.

SPACECAL: CAL Optimized for In-Space Manufacturing

This article is based on the presentation I prepared for the 2021 VAM (Volumetric Additive Manufacturing) Workshop, hosted by Dr. Maxim Shusteff and colleagues, which brought together the global VAM community to share exciting research in the field of volumetric 3D printing. Below is the video presentation along with my presentation notes.

VAM-Workshop-Presentation

Additional Comments

  • Layer-based lithography systems may be incompatible in microgravity
  • We’ve demonstrated overprinting in lab, which could increase spare part diversity
  • The absence of relative motion between the photo-crosslinked object and the surrounding material enables bio-printing in extremely low-storage modulus hydrogels
  • Print times are much faster than layer-by-layer extrusion techniques
  • The print process occurs in a sealed container, which can be transported to the ISS, used, and returned to Earth with minimal human interaction
  • Minimal downtime for cleaning and maintenance as there is no extrusion system
  • Energy availability is a formidable challenge for in-space manufacturing, which means mechanical motion must be kept to a minimum.
  • NScrypt/Techshot plans to reduce the organ donor shortage (there are about 113,000 people on transplant waiting lists) by creating patient-specific replacement tissues or patches.

Introduction to Photocurable Resin

Nancy Zhang first joined Carbon3D as a staff research scientist. While her first years were primarily focused on tailoring resins for a project with Adidas, she has maintained a versatile role throughout her career. After the successful launch of the Adidas midsoles, Nancy moved into a managerial position in R&D, where she focuses on elastomer development and formulation. Her sharp enthusiasm and remarkable knowledge base show that I was talking to the right person to learn about resins. She is the type of person who knows what makes a resin: how to achieve ultimate elasticity, strength, biocompatibility, and "printability", and how to ensure that the resin is photocurable or, potentially, recyclable. As one can imagine, that's a lot of demands for one material. As lead of Material Characterization, it's no wonder she describes her work as "brain gymnastics".

Prioritizing the mechanical properties and printability of a resin is only the tip of the iceberg in material characterization. Additive manufacturing provides a platform for versatile product manufacturing, which demands material versatility along with it.

Comments from the Author: I first envisioned this article as an interview, which I held with Nancy Zhang, an R&D Manager of Material Characterization, several months ago. Nancy helped guide a significant portion of my ongoing additive manufacturing research, and while I don't plan to delve deep into the chemistry, I will include enough to discuss how photocurable resins are made for layer-by-layer additive techniques. Something I've taken particular notice of is that there seem to be few papers that make general formulation assessments tied to the mechanical behavior of photopolymer resins in lithography, which may point to the need for a thorough review paper.

“Let’s say I’m selecting a material for a car product. I only need to look at the car and the function of the component. I don’t have to think about the car, where it drives, everything outside the car, and being thrown around in the trunk.”

Layer-by-Layer Lithography Printers

The premise of the SLA/DLP process is to solidify photocurable resin using a light source and sequentially lift the solidified part out of a resin reservoir (vat). SLA (stereolithography apparatus) processes can be top-down systems, where a scanning laser above the resin reservoir traces out each layer of solidified material. DLP (digital light processing) processes, by contrast, are typically bottom-up, with the light source (a projector) emitting light into a shallow reservoir from below. When a layer is cured, the component is lifted out of the vat and another layer is cured.

SLA/DLP processes resemble FDM (fused deposition modeling) methods only in that both solidify material layer by layer. Their similarities stop there. Once a part is completed in an SLA/DLP process, there is typically a second post-processing step. Post-processing cleans up any uncured material from the surface of the part. When a part is cured, the excess material can be washed away using acetone or another solvent, and the part is then placed in a UV or thermal oven to cure any remaining material without introducing new resin.

Volumetric lithography is another type of additive manufacturing and differs from SLA/DLP and FDM methods in that it does not solidify material layer by layer; rather, it solidifies a part volumetrically. Volumetric lithography printers have demonstrated 30-second print times, micro-scale resolution, and heightened material compatibility, printing softer materials such as polymeric resins, acrylates, and urethanes to generate biocompatible parts. (1)

Many forms of lithographic printers (see image) have a narrow viscosity range that permits rapid printing. The liquid resin should be able to move quickly throughout the vat. That is, after each layer is cured and the part is raised for the subsequent layer, the surrounding resin should flow in to fill the void left by the previous layer. This becomes a problem when printing with higher-viscosity resins, as it forces slower print speeds to allow the resin to flow. Some estimates in the literature set a ceiling of roughly 5 Pa·s on resin viscosity for rapid printing. (1,2) This dependence on resin flow in layer-by-layer printing motivates the development of printing techniques that are independent of resin flow. (3)*

Most research into additive manufacturing processes has investigated the mechanical properties of AM parts by varying process parameters such as post-curing time, layer thickness, and orientation. These studies characterize the material properties of additively manufactured parts experimentally, but do not provide an accurate prediction of the mechanical properties of the resin before cure. This information is important, since a number of parameters that must be taken into consideration during resin formulation affect the overall material character. The amount of photoinitiator, oxygen presence, apparent viscosity, the curing dose: these are just a few of the variables that can be adjusted in formulation and that significantly impact the final cured part. (4)

*This is the idea behind computed axial lithography (CAL), which I intend to write about in subsequent articles.

“Imagine Printing a Spring”

The viscosity of a photocurable resin has been shown to correlate positively with the elasticity of the post-cured material. High-viscosity resins yield parts with higher "green" strength, the strength measured after a stage 1 cure. When a resin is developed, formulators may plan for lower viscosity or for increased green strength, though the latter involves an extended print time, which can slow production.

Printing elastomers is more difficult than printing rigid polymers because of the positive correlation between viscosity and green strength. Elastic resins are characteristically less viscous than high strength polymers, which leads to “sticking” at the print interface from surface tension.

“With [additive manufacturing], the idea is that a material can be printed into any object, quickly and easily, and that object can be used in any environment.”

Photoinitiators for Photopolymerization

Photopolymerization is a very general term covering any light-induced polymerization reaction in which an initiating molecule (photoinitiator) triggers a chain reaction that combines a large number of monomers or oligomers into a polymer chain. These reactions are referred to as free-radical reactions and are carried out in three dimensions, such that multiple polymer chains may bond to one another (cross-linking) and produce a polymer network. Most resins by themselves are not reactive to light. The photoinitiator has the crucial role of absorbing light energy and reacting with an available monomer or oligomer, which begins the chain reaction between the resin's monomer/oligomer units. (5)

Oxygen Presence

Oxygen is a reactive element whose presence is detrimental to free-radical polymerization. During a photoinitiation reaction, oxygen decreases the yield of the initiating species as it bonds with radicals to produce highly stable compounds that inhibit the growth of a polymer chain. Recent studies have indicated that when printing in high-viscosity resins, reoxygenation at the layer interface is much slower, which makes the polymerization process easier. Low-viscosity resins, on the other hand, have a rapid reoxygenation time at the layer interface, leading to incomplete interfacial layer bonding. (6)

Some reviews have also concluded that layer-to-layer (interfacial) strength is actually promoted by oxygen, as it leads to slower consumption of double bonds at the surface layer.** Because oxygen decelerates the photoinitiation reaction, an increase in oxygen leaves more unconverted bonds, which can react with the subsequent layer and thereby improve layer-to-layer strength. (7)

At Carbon, the team might use differential scanning calorimetry (DSC), which measures heat flow as a function of temperature and time, to estimate how effective the cure was, since converting carbon-carbon double bonds to single bonds releases heat.
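
One common way to turn that exotherm into a number is the residual-heat method (a minimal sketch; the enthalpy values below are hypothetical, not Carbon data):

```python
# Degree of cure estimated from DSC exotherms (residual-heat method).
total_exotherm = 420.0     # J/g, exotherm of the fully uncured resin (hypothetical)
residual_exotherm = 63.0   # J/g, residual exotherm of the printed part (hypothetical)

degree_of_cure = 1 - residual_exotherm / total_exotherm
print(f"Estimated degree of cure: {degree_of_cure:.0%}")   # -> 85%
```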

Formulation and Characterization

Formulating a new resin is guided by the properties you need. When developing a new material, the Carbon team will set a goal, say a high elastic modulus, and begin their search by examining the polymer families that can meet that benchmark. As they add properties for their new resin, they narrow down their polymer selection. As with most polymers, the hardest behaviors to make formulation judgements about are real-world properties such as UV resistance, polymer aging, and chemical compatibility, because they cut across the objective of building versatile resins.

This, of course, is not everything that goes into the complex science of resin development, and there is still a lot to uncover about post-curing processes and curing dose. I'll save those for another time. For now, there is at least some appreciation for the hard work going on to integrate additive manufacturing into industry and build access to fully photocurable, biocompatible, high-strength and, maybe one day, fully recyclable resins.

**Zeang Zhao et al. also concluded that "interfacial strength decreases with curing time and incident light intensity, while the presence of oxygen can significantly improve the strength at the interface." They also found that interfaces with improved strength can be obtained either by decreasing the amount of photoinitiator or by using short-chain crosslinkers that increase the concentration of double bonds.

References

(1) Yang, Y., Li, L., & Zhao, J. (2019). Mechanical property modeling of photosensitive liquid resin in stereolithography additive manufacturing: Bridging degree of cure with tensile strength and hardness. Materials & Design, 162, 418-428. doi:10.1016/j.matdes.2018.12.009

(2) Quan, H., Zhang, T., Xu, H., Luo, S., Nie, J., & Zhu, X. (2020). Photo-curing 3d printing technique and its challenges. Bioactive Materials, 5(1), 110-115. doi:10.1016/j.bioactmat.2019.12.003

(3) Kelly, B. E., Bhattacharya, I., Heidari, H., Shusteff, M., Spadaccini, C. M., & Taylor, H. K. (2019). Volumetric additive manufacturing via tomographic reconstruction. Science, 363(6431), 1075-1079. doi:10.1126/science.aau7114

(4) Taormina, G., Sciancalepore, C., Messori, M., & Bondioli, F. (2018). 3D printing processes for photocurable polymeric materials: Technologies, materials, and future trends. Journal of Applied Biomaterials & Functional Materials, 16(3), 151-160. doi:10.1177/2280800018764770

(5) Fouassier, J. P., & Lalevée, J. (2012). Photoinitiators for polymer synthesis: Scope, reactivity, and efficiency. Weinheim: Wiley-VCH.

(6) Lalevée, et al., Radical photopolymerization reactions under air upon lamp and diode laser exposure: The input of the organo-silane radical chemistry, Prog. Org. Coat. (2010), doi:10.1016/j.porgcoat.2010.10.008

(7) Zhao, Z., Mu, X., Wu, J., Qi, H., & Fang, D. (2016, June 01). Effects of oxygen on interfacial strength of incremental forming of materials by photopolymerization. Retrieved April 22, 2021, from https://www.sciencedirect.com/science/article/pii/S2352431616301055

How the Crystal Structure of Carbon-Steel Changes during Tempering

This article is a revision containing my contributions to a laboratory assignment in Mechanical Behaviors of Materials. If you are interested in learning more about this study, please feel free to email me via the Contact Form. Paragraphs marked with (*) are important to the article, but may not have been explicitly authored by me.

Lab Theory and Important Elements

Heat treatment (tempering) is a metallurgical procedure used to enhance the strength and toughness of steel. It requires high temperatures, up to 1000℃, that can be reached with a heat-treat oven or kiln. Untempered steel has a body-centered tetragonal (BCT) martensitic structure. The martensite structure exhibits high strength, but is prone to dislocations and line defects due to the tetragonal distortion caused by interstitial carbon atoms. The body-centered tetragonal martensite is formed from austenite (ɣ), an FCC structure. During tempering, the tetragonal martensite transforms to cubic ferrite as carbon precipitates out of the martensite. These carbon atoms precipitate as carbides, which limit the motion and density of dislocations in the material (Gensamer et al., 2012).

(*) Quenching is a rapid cooling process in water or oil, used to obtain a certain material property by preventing undesired low-temperature processes, such as phase transformations, from occurring. It does this by reducing the window of time during which these undesired reactions are both thermodynamically favorable and kinetically accessible. In our case, this induces a martensite transformation: the steel must be rapidly cooled through its eutectoid point so that the austenite remains metastable. (*)

In order to achieve high strength and toughness, changes in temperature from quenching (rapid cooling) and heating are balanced to combine material properties from different phases. This process is largely dependent on temperature, but it should be noted that the same phase transitions can be achieved at lower temperatures with a trade-off in time. When a specimen is cooled past the martensite start boundary (Ms), the phases from tempering are locked into the composition.

Other compositions important to this lab are bainite, a combination of ferrite and cementite, and pearlite, a product of the transformation from austenite to ferrite and cementite (Callister, 2020). The two are less brittle than martensite, but also lower in strength (Mazilkin et al., 2008).

Both steel specimens, A36 and 1045, can be classified as hypoeutectoid, since their carbon content is below 0.76%. Therefore, the region of interest in the phase diagram is to the far left of Fig. 5. For A36, the carbon content is 0.25-0.29 wt%, while for 1045 it is 0.45-0.50 wt%. In the case of A36 steel we used a TTT diagram for eutectoid steel, as one was not readily available for A36 (Fig. 2). Using the diagram for eutectoid steel, the A36 steel begins as ferrite. When heated, it becomes austenite (Fig. 2, A). When cooled to room temperature, A36 transitions to martensite. After bead blasting, the specimen is placed in the oven at 400°C, where it becomes 50% austenite and 50% bainite. After one hour in the oven, the remaining austenite is transformed to bainite, so 100% bainite is the final composition of the material.
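
As a side note, the hypoeutectoid carbon contents above also fix the equilibrium ferrite/pearlite split under slow cooling via the lever rule (a minimal sketch; equilibrium cooling is not the quench-and-temper path analyzed in this lab):

```python
# Lever rule: equilibrium pearlite fraction just below the eutectoid temperature
# for hypoeutectoid steels (slow cooling, unlike the quench/temper path above).
C_ALPHA, C_EUTECTOID = 0.022, 0.76   # wt% C in ferrite and at the eutectoid point

def pearlite_fraction(c0: float) -> float:
    return (c0 - C_ALPHA) / (C_EUTECTOID - C_ALPHA)

for name, c0 in (("A36 (~0.26 wt% C)", 0.26), ("1045 (~0.45 wt% C)", 0.45)):
    print(f"{name}: ~{pearlite_fraction(c0):.0%} pearlite, balance proeutectoid ferrite")
# -> A36 ~32% pearlite, 1045 ~58% pearlite
```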

Fig. 5: Phase Transformation Diagram

For the 1045 steel, the composition begins as ferrite. Upon heating to 800℃, it undergoes a phase transformation into bainite, which is then quenched to room temperature and remains bainite (Fig. 3, B). While the specimen is bead blasted, we assumed a time of roughly 1.5 minutes at room temperature before placement in the lower oven.

The specimen is then heated to 400℃ in the lower oven, where the specimen consists of 50% ferrite and 50% martensite (Fig. 3, D). The specimen is held in the oven for one hour, and finally cooled to room temperature where the final composition is 50% pearlite and 50% martensite (Fig.3, F, G).

From this analysis of the phase transformations for A36, which has a final composition of 100% bainite, and 1045 steel, which has a final composition of 50% pearlite and 50% martensite, we would expect the following characteristics for the materials tested in Chapter 2:

  1. A36 should prove to have higher toughness but lower strength, as the composition has shifted from 100% ferrite to 100% bainite, which is interpreted from our theory section in (2). 
  2. 1045 will be stronger and tougher than the A36 specimen, due to the presence of martensite, and tougher due to the presence of pearlite.

References

Callister, W. D., & Rethwisch, D. G. (2020). Materials science and engineering. Hoboken, NJ: Wiley.

Gensamer, M., Pearsall, E.B., Pellini, W.S. et al. The Tensile Properties of Pearlite, Bainite, and Spheroidite. Metallogr. Microstruct. Anal. 1, 171–189 (2012). https://doi.org/10.1007/s13632-012-0027-7

Ismail, N. M., Khatif, N. A. A., Kecik, M. A. K. A., & Shaharudin, M. A. H. (2016). The effect of heat treatment on the hardness and impact properties of medium carbon steel. IOP Conference Series: Materials Science and Engineering, 114, 1–10. https://doi.org/10.1088/1757-899x/114/1/012108

Komvopoulos, K. (2017). Mechanical testing of engineering materials. Cognella. 

Komvopoulos, K. (2021). What happens during steel tempering. Mechanical Behaviors of Materials, 1-11.

Mazilkin, A. A., Straumal, B. B., Protasova, S. G., Dobatkin, S. V., & Baretzky, B. (2008). Structure, phase composition, and microhardness of carbon steels after high-pressure torsion. Journal of Materials Science, 43(11), 3800–3805. https://doi.org/10.1007/s10853-007-2222-5.