The Race to Zero

This report was prepared from the FujiFilm 2022 Summit. It reviews the astounding data growth in the archive storage space, some frank advice from Silicon Valley's storage gurus on avoiding imminent vertical-market failure driven by the relentless growth of archival data, and the innovations driving the future race to zero: zero $/GB, zero waste, zero-carbon-footprint storage.

The Zettabyte Era

By 2025, roughly 175 ZB of data are projected to be created and 11.7 ZB stored, equivalent to 66 years of the Large Hadron Collider's experimental data, 125 million years of 1-hour TV shows, 55.3 million LTO-9 cartridges, or 50 million 20TB hard disk drives. Projecting a 30% CAGR forward, we will enter the Yottabyte Era by about 2043.
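
A quick sanity check on that Yottabyte date, compounding the 11.7 ZB stored figure at 30% per year (a rough sketch; the starting point and growth rate are simply the figures quoted above):

```python
# Back-of-envelope check of the Yottabyte Era estimate.
# Assumes 11.7 ZB stored in 2025, growing at a constant 30% per year; 1 YB = 1,000 ZB.
stored_zb, year = 11.7, 2025
while stored_zb < 1000:
    stored_zb *= 1.30
    year += 1
print(year)  # lands in the early 2040s, consistent with the ~2043 estimate
```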

Archival data accounts for roughly 9.0 ZB of the total data stored, about 80%, according to figures from IDC. Traditionally, "archival data" has been an umbrella term for boring stuff: medical records, corporate compliance documents, emails, old movies.


Looking to the future, "archive" will refer to a spectrum of data, from active archive to dark archive, storing everything from film-industry media to AI/ML/IoT training data that is accessed heavily for a few weeks before moving back down to the lower archive tiers. The divisions between tiers on this spectrum will vary with access frequency, with the lower tiers growing larger as data ages.


Moving data between layers depends entirely on whether the tiered storage architecture is tightly or loosely coupled. The top of the hierarchy holds the golden copy of the data, while the lower tiers (tier 3 and below) hold the master copy. Roughly 10% of the world's data sits in that golden-copy tier, the highest-performance tier, whereas 80% is low-activity archival data in the lowest tiers. The greatest challenge is centered in the primary and secondary stages, where data moves from hot to cold. This region is dynamic: neither suited to performance-critical data nor to long-term retention. By 2025, this model will take shape as the active archive tier: data that sees high access for three to four weeks and is then moved back down to the deep archive.
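
To make the tiering idea concrete, here is a minimal sketch of the kind of rule that could place data on the archive spectrum. The tier names follow the model above, but the thresholds are illustrative assumptions, not figures from the summit:

```python
from datetime import datetime, timedelta

# Illustrative tier-assignment rule for the archive spectrum described above.
# The access-frequency and age thresholds are assumptions for this sketch only.
def assign_tier(last_access: datetime, accesses_per_month: float) -> str:
    age = datetime.now() - last_access
    if accesses_per_month >= 10:
        return "primary (golden copy)"
    if age < timedelta(weeks=4):
        return "active archive"        # hot for a few weeks after recall
    if age < timedelta(days=5 * 365):
        return "archive"
    return "deep/dark archive"         # touched once in decades, if ever

print(assign_tier(datetime.now() - timedelta(weeks=2), 1))    # active archive
print(assign_tier(datetime.now() - timedelta(days=4000), 0))  # deep/dark archive
```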

Storage Media (SSD, HDD, and Tape): What Stores Old Facebook Posts?

Two media serve the archive tiers: tape and HDD. Let's discuss how they differ. Note that SSDs are rarely used for archival storage. While SSDs are the highest-performance storage media, with data access times between 25 and 100 microseconds, each program/erase cycle (writing a one or a zero to a flash cell) slightly degrades the cell. As a result, SSDs are limited in the number of write cycles they can endure, which makes them unsuitable for a long-term, archival storage solution. They are also nearly 10x more expensive than HDDs on a $/GB basis. Tape and HDD serve the archive layer, and the majority of hyperscale archival data (80%) is stored on HDDs.

Courtesy of Fred Moore: FujiFilm 2022 Summit

HDD

HDDs are a storage medium in which ones and zeros are written to 3.5″ or 2.5″ magnetic disks spinning at 7,200 to 15,000 RPM. As the disk spins, a read/write head flips the magnetic polarity of grains on the platter; the direction of the magnetic field vector determines whether a one or a zero was written. In a datacenter, a 3.5″ HDD is mounted in a system for online access, such as a NAS (network-attached storage), a JBOD (just a bunch of disks), or a server. There are only three HDD manufacturers left in the world: Toshiba, Western Digital, and Seagate, locked in a never-ending battle for the lowest $/GB, highest areal density, and best access time, all under pressure from supplier partnerships and a complex supply chain shaped by global politics (well, yeah, this is all global supply chains, is it not?).

Tape

Tape, on the other hand, is an older storage medium, first developed in the 1950s and standardized in the late 1990s under the LTO format by a consortium that included IBM. The tape market is what business folk call a "consolidated market": there are only two tape media form-factors, and both fall under the same jurisdiction, IBM's. IBM's parenting style is that of the storage industry's tiger mom: it manages its own enterprise tape format, is the sole manufacturer of tape drives for both the LTO and IBM formats, and insists on releasing its IBM cartridges at least two years ahead of the equivalent LTO generation.

A tape drive is the system used to read and write tape cartridges; drives are stacked together to form a tape library, which can store up to half an exabyte. The benefit of using tape, especially for archive, is its ability to be disconnected from the network: simply take out the cartridge, throw it in a box, and there is a physical "air gap" between that data and the network. Historically, tape has been used for backup, hierarchical storage managers (HSMs), and media asset management. At some hyperscale operations (Amazon S3, Microsoft Azure, Alibaba), tape is used as the primary archive storage medium; you can tell because access times can range from 1 to 12 hours, the physical cartridge being stored in a box at a separate, off-site location.

Total Cost of Operations (TCO)

Deciding between tape and HDD for a cloud operation depends not only on access time, average storage capacity per unit, and offline storage capability, but also on a key metric: the Total Cost of Operations (TCO). TCO refers to all the costs associated with media utilization, from raw production to end of life.

Consider a full-height LTO tape drive, the system used to read from and record to a tape cartridge. Its average energy usage is 0.031 kWh, with an average life cycle of 6.85 years. Accounting for production, distribution, and operational energy, a single LTO tape drive will produce 1.4 metric tons of CO2 per year. Storing 28 PB of archival data for 10 years in a tape-based solution produces 78.1 metric tons of CO2, using 14 LTO-9 tape drives and 1,500 LTO-9 cartridges in a single frame. The equivalent amount stored on HDDs in a JBOD would produce 1,954.3 metric tons of CO2 over 10 years, using 18TB HDDs for the first five-year cycle and 36TB HDDs for the second. Those figures indicate roughly 10 times more yearly energy consumed by an HDD-based system than by a tape-based system (Brume, IBM).
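
Putting the two ten-year CO2 totals quoted above side by side makes the gap explicit (the numbers are simply the figures from the Brume/IBM comparison, nothing new):

```python
# Ten-year CO2 figures quoted above for storing 28 PB of archival data.
tape_co2_t = 78.1     # metric tons CO2: 14 LTO-9 drives + 1,500 cartridges, one frame
hdd_co2_t = 1954.3    # metric tons CO2: JBOD with 18TB then 36TB HDDs
print(f"HDD-based system emits ~{hdd_co2_t / tape_co2_t:.0f}x the CO2 of tape over 10 years")
```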

Right now, you can purchase an 18TB-native-capacity LTO-9 tape cartridge for $148.95, with 1,035 m (more than a kilometer) of magnetic tape inside. HDDs, on the other hand, carry a higher cost per unit ($529.99 for a WD 20TB SATA Gold), but their areal density (the number of bits per square inch of media) is three orders of magnitude higher than tape's. Next-gen HDDs suited for archive will approach a whopping 36TB, and rumors have spread of a lower-performance 50TB HDD from Western Digital entering the market. These new HDDs will likely serve part of the future archive market, specifically the first, active tier of the archive spectrum, but they cannot serve all archive data, especially as it gets older and colder.
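
Dividing those street prices by capacity shows why tape keeps winning the $/GB race at the media level (drive, library, and enclosure costs are deliberately left out of this sketch):

```python
# Media-only $/GB from the street prices quoted above (using 1 TB = 1,000 GB).
media = [("LTO-9 cartridge (18TB native)", 148.95, 18),
         ("WD 20TB SATA Gold HDD", 529.99, 20)]
for name, price_usd, capacity_tb in media:
    print(f"{name}: ${price_usd / (capacity_tb * 1000):.4f}/GB")
# Roughly $0.008/GB for the tape cartridge vs ~$0.026/GB for the HDD.
```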

Source: Information Storage Industry Consortium Areal Density Chart, from Carl Che (WD)

This raises the question: between HDD and tape, where is each best used? Where does energy efficiency become the highest concern? And what about data accessed once every 30 or 50 years in the dark archive?

The Deep Dark Archives

Recall that 80% of hyperscale’s archival data is stored on HDDs. Is this truly the best solution? I’m just an intern, but I say no. Here’s why:

If we accept the tiered archive model as valid, where the probability of access, the access frequency, and the average value of the data determine the tiers, then the deep dark archive should be stored on neither tape nor HDD. The data stored in the dark archive has almost no value. Our conditions, therefore, are: (1) near-zero $/GB, (2) near-zero carbon footprint, (3) near-zero product waste. Storage accounts for 19% of total power at a hyperscale datacenter, and moving cold data from HDD to tape can dramatically reduce ten-year CO2e while simultaneously reducing e-waste. There are also substantial TCO savings in migrating cold data to tape, and companies of all sizes are looking to improve the sustainability of their customer-facing storage.

The Media is the Data, the Data is the Media

It's important to take a step back now and recognize the issue at hand, which I would go as far as to describe as "hoarding". We are collecting swaths of data, we will continue to collect swaths of data, and that data will continue to demand storage media devices to hold it. This cycle is wasteful. Is there a solution?

— to be continued…

What is Bought With Money is Bought By Labor

03/08/2022 – Lessons from Adam Smith

Starting with Questions:

In this section, I seek to answer if and why the rules of a productive economy have shifted as the world grows its automation capabilities (something which actively removes workers from the labor force) and extends humanity to the stars. Are the rules the same? In a manufacturing center, does anything change when I decide to replace my labor force with automation? What if I do that in a developing nation, somewhere a working class has been characterized as an essential ingredient for building up the economy? If there is a development timeline, and nations cannot magically jump ahead to the state of "developed", then will the world as a whole always drag the weight of the impoverished? Is this now a matter of choosing sides? Do I want to help the poor or do I want to push for innovation? Am I Robin Hood or Elon Musk?

So we can make the argument that countries are poor because the wealthy take, or took, advantage of them. Those countries still have resources that could lead to a productive economy: they have raw goods, workers, their own currency. Why, then, is progress so slow? Is it corrupt government, or an inability to properly organize, divide labor, and produce? Is it because they do not know the recipe for the economic engine? Or do they know it, and it is simply too difficult to implement because the path of development and the flow of money has too much resistance?

The case of global development asks if, and why, countries lower on the development ladder will still struggle even as the most developed societies take to the stars. Why is it that we have the capacity to travel to space, yet poverty remains? Is there a cure for poverty? Why does the push for innovation outpace the push for development?

What is the proper level of taxes, and how should they be levied? Were trading monopolies and colonies a financial boon or a drain on states? Should agriculture make way for industry?

The Notes:

Adam Smith looks to build an economic model based on individual power and liberty

The wealth of a nation depends on the skill and judgement of its workers and on what proportion of the population is employed in such productive work. What we are really exchanging is the value of our own labour for someone else's.

What separates an advanced nation from a poorer one is the division of labor, and highly developed cities and towns are strong because of a strong division of tasks and roles. A person concentrates on making one thing only, and does it so efficiently that the return from selling that thing is more than enough to buy the commodities they otherwise would have had to make themselves.

In a well-governed society, the division of labor leads to universal opulence.

The bigger and more developed the economy, the more specialized the workforce. Big cities are wealthy precisely because of their increased division of physical and mental labor. By choosing work that best suits us and gives us a greater chance of gain, society wins by having access to our refined and unique skills or creations. Human beings did not differ much from each other; it was what they produced that set them apart. Yet it can also be said that by giving a monetary value to invention, uniqueness, and originality, Smith's ideas helped increase the quantity of each.

He was perhaps forced to admit that self-interest (rather than regard for others) was the essential engine of prosperity.

By seeking our greatest gain, we in fact assure the best allocation of capital for society. Today's aircraft makers, for instance, depend on the extremely specialized skills and machinery of thousands of component makers to produce a single aircraft, which leads to very long lead times between taking the order and final delivery.

A few things to observe regarding a society's combined stock or capital: 1) that which is immediately consumed, 2) fixed capital (things that produce revenue), and 3) circulating capital (money, or any other form of capital that people can easily get their hands on).

ALL FIXED CAPITAL IS ORIGINALLY DERIVED FROM CIRCULATING CAPITAL AND FIXED CAPITAL CONTINUALLY NEEDS THE INJECTION OF MORE CIRCULATING CAPITAL TO MAINTAIN IT.

Everybody wins from a more productive use of capital. Note that most people (even in a rich country) do not dream of thinking of themselves as the laboring poor, but they still earn a salary based on going to a place of work and doing certain tasks in a certain time period. If they just stopped working, they would have nothing to live off of. Individuals through their frugality might be able to become capitalists themselves by living off of the profits of a business or rent from property or interest on money invested, instead of daily wages from the sweat of their brow.

Wherever capital predominates, industry prevails

21st Century Operations Strategy

A compilation of my notes on Operations Strategy

January 31st, 2022

The world we live in was well predicted by 20th-century forecasters: "Reduced barriers to trade are allowing smoother flow of goods, services, capital and labor across geographic boundaries, and thus more efficient allocation of resources globally" (Sara L. Beckman and Donald B. Rosenfield). Operations strategy addresses how a company should structure itself to compete in this complex, web-based economy, from major capital decisions and technology adoption to the development of a strong supply base able to deliver sophisticated products and services, all while remaining competitive against the fastest-developing economies of the world.

Operations includes both manufacturing and service operations, ranging from small- and large-volume production to continuous-flow operations such as commodity industries. Service operations in the 21st century include healthcare and office work, but they also constitute the majority of online retail companies, including Amazon and Etsy. The latest economic blunder has prompted a re-placement of operations activities to deliver the same required output with fewer resources.

Global Development Operations Strategy

In a developed economy such as the United States, resources tend to shift toward service operations. In contrast, developing economies remain anchored to agriculture and manufacturing until segments of the economy raise the standard of living, and labor costs with it. This drumbeat of transition has been seen time and time again in global economics. For example, in the "four tigers" (South Korea, Taiwan, Singapore, and Hong Kong), labor remained focused on commodity markets, where the basis of competition is largely cost rather than quality. Following an influx of production, labor costs increased significantly, enough that these nations shifted their priority to quality goods and innovation.

When manufacturing and service operations in advanced economies are uncompetitive, exchange rate adjustments cause costs to come down and the standard of living is reduced. For less developed societies, investment in manufacturing and service operations provides a mechanism for creating jobs and raising the standard of living.

Should advanced economies outsource their operations to lower cost locations and focus their efforts on gaining a competitive advantage?

February 8th, Vertical Integration

Vertical integration decisions are among the most fundamental decisions a company can make, and they should be revisited regularly. Addressing vertical integration boils down to questions like:

  1. How much of the value chain should we own?
  2. What activities should we perform in house?
  3. For the activities we perform in house, do we have sufficient capacity to meet internal demand?
  4. Under what conditions should we change how much of the value chain we own?
  5. Should we direct changes towards the suppliers or towards the customers?

Strategic Factors, Market Factors, PST Factors, and Economic Factors all weigh into Vertical Integration Decisions:

Strategic Factors include whether or not an activity is critical to developing or sustaining the core capabilities of a firm.

Market Factors focus on the dynamics of the industry in which the firm resides.

PST Factors (Product, Service, and Technology) relate to the technology, the product and service architecture, and product or service development.

Economic Factors balance the costs of owning an activity with the costs of transacting for it instead.

Question: Select a company and explain what vertical integration decisions it has made.

SPACECAL: CAL Optimized for In-Space Manufacturing

This article is from the presentation I prepared for the 2021 VAM (Volumetric Additive Manufacturing) Workshop hosted by Dr. Maxim Shusteff and colleagues, which brought together the global VAM community to share exciting research in the field of volumetric 3D printing. Below is the video presentation, along with my presentation notes for readers.

VAM-Workshop-Presentation

Additional Comments

  • Layer-based lithography systems may be incompatible with microgravity
  • We’ve demonstrated overprinting in lab, which could increase spare part diversity
  • The absence of relative motion between the photo-crosslinked object and the surrounding material enables bio-printing in extremely low-storage modulus hydrogels
  • Print times are much faster than layer-by-layer extrusion techniques
  • The print process occurs in a sealed container, which can be transported to the ISS, used, and returned to Earth with minimal human interaction
  • Minimal downtime for cleaning and maintenance as there is no extrusion system
  • Energy availability is a formidable challenge for in-space manufacturing, which means mechanical motion must be kept to a minimum
  • NScrypt/Techshot plans to reduce the organ donor shortage (there are about 113,000 people on transplant waiting lists) by creating patient-specific replacement tissues or patches.

SPACECAL Research Project Report

Design for Manufacturing, Taylor Labs, University of California, Berkeley

Authored by Tristan W Schwab, Undergraduate of Mechanical Engineering April 30, 2021

The scope of this report is to discuss the progress of the SpaceCAL Project aimed towards the Zero-Gravity flight test in the Spring of 2022 and my contributions to the project and Design for Manufacturing Group during the Spring semester of 2021.

Figure 1

NASA Tech Flight Proposal

The purpose of SpaceCAL is to develop a compact enclosure containing 5 parallel computed axial lithography (CAL) printers to be tested in suborbital flight with the Zero-Gravity Flight Demonstration. The system is projected to fly in the Spring of 2022 and complete several prints in resins of varying viscosity.

SpaceCAL is inherently a microgravity technology demonstration of the current CAL technology developed at the University of California, Berkeley; however, alternative scopes of research may apply. Suggestions have been raised for extending additive manufacturing research for in-space manufacturing, including: 1) part strength versus fluid velocity through shadowgraph analysis in resin vials, 2) in situ automated post-processing of CAL prints, 3) testing of a temperature control system in a space environment for low-gravity resin printing, 4) tracking low-viscosity polymerization in low gravity, and 5) performance of CAL for microgravity bioprinting.

The SpaceCAL project can ultimately demonstrate not only the unique abilities of CAL AM technology but also its potential to provide invaluable research in the growing field of in-space manufacturing.

As mentioned, the SpaceCAL project is scheduled to fly in the Spring of 2022 (next year). Since the project's launch in January, the primary focus has been the development of an enclosure to contain the CAL system, electronics, optics, and hardware, and the design of a compact, exchangeable vial stack (fig. 1, right). The planned design has evolved since the initial Flight Tech Proposal. First, the original documentation discussed three sets of vial stacks containing 22 hydrogels and 11 high-, medium-, and low-viscosity resins; the current design contains 5 vial stacks of 5 vials each, and changing the number of vial stacks and vials affects the number of resins that will be used in flight. Second, the project will no longer use "Schlieren imaging to record video data of the refractive index history", but rather shadowgraph methods, due to the high sensitivity of Schlieren imaging.

Current State of SpaceCAL and Contributions

SpaceCAL exists in an early design stage in SolidWorks. Full purchase orders for the primary assembly include hardware, including but not limited to 8020 beams, linear guide rails, projectors, optics, and electronics.

The author was established as the Mechanical sub team lead, tasked with rebuilding the first iteration of the SpaceCAL system on Autodesk Fusion 360. The author rebuilt an 8020 enclosure (Frame Assembly) and performed finite element analysis on Fusion 360 to analyze g-loads on the frame specified by Zero-Gravity.

The author used these analyses to confirm the material selection of the frame assembly and to add triangulated supports to the frame.

Progress then shifted to rebuilding the SpaceCAL system in SolidWorks (due to the limited parametric-modeling capabilities of Fusion 360). The author collaborated with the graduate student lead, Joe Toombs, to develop a parametric sketch in SolidWorks and build the second version of the frame assembly. Current focus has turned to the development of optical setups for particle tracing, continuing summer research positions in the Design for Manufacturing group, assembling the SpaceCAL system, and collaborating with fellow future Cal grad student Taylor Waddel on mechatronics and software. Future project objectives are expected to shift toward resin formulation and characterization.

The SpaceCAL project certainly arrives at a remarkable time for the College of Engineering at Berkeley to be involved in the growing excitement for exploration and industry in space.

Visualizing Stress Distribution in 3D Printed Lattices

The first portion of this article showcases my final project in PDF format. My first prototype is shown below the PDF.

Click here to view the final prototype video.

TristanWSchwabFinalProject

First and Second Prototype

Project Description:

For the initial prototype of this project, I demonstrate the unique compliance of lattice structures designed and optimized in NTopology and manufactured on an Ender3 Pro and Formlabs3 using elastic materials. I showcase my design process, my thought process in building lattices in NTopology, and my process for building an interface to visualize force distribution through a lattice.

Click here to see the prototype demonstration. 

Designing a Lattice:

There are multiple platforms for designing lattices. I selected NTopology, a software used in the industry and readily available on a student license. NTopology is unique for its interface with AM and easy “block” UI, which proved to be very efficient when learning about different lattice structures and adjusting parameters.

Figure 1: Fluorite and Body Centered Cubic Lattice Structures generated in NTopology after importing a CAD Body

Printing Process:

The first CAD bodies I designed had small voids that I envisioned could house the force-sensitive resistors. This idea would likely have worked, but printing on my Ender3 in TPU, an elastic filament, showed that printing any of the lattices with the support structures needed for the voids was not easily scalable and unnecessarily overcomplicated the design. Ultimately, those prints did not turn out well, and I decided that generating a simple rectangular-prism lattice without voids would be the best solution.

Unfortunately, the files I sent to the Jacobs Center to print on their FormLabs SLA printer were unusable. I wish I had given more thought to the initial design so that I could use an SLA print for a comparable demonstration, but I submitted my initial design prematurely. This print was made in elastic 50A resin, which may have been a bit too elastic for the purposes of this project.

Figure 2: Example of TPU failed lattice print with voids
Figure 3: Failed SLA print.

It turns out that the best print is the simplest design. This could not be more true when it comes to printing elastic lattices, which fundamentally behave like springs. I tried a variety of lattice designs: Weaire-Phelan, Kelvin cell, Isotruss, Fluorite, and so on. All of these lattices are nearly impossible to manufacture without support structures. Since I prioritized visualizing the cells themselves, printing at a low density was a heavy consideration. The best lattice for my purposes was the body-centered cubic design, which does not present overhangs greater than 45 degrees that would necessitate printing supports.

Circuit Design

To begin my circuit design, I set up one FSR (force-sensitive resistor) embedded in a lattice I had on hand. Once I generated the right print, I tried embedding two FSRs accompanied by LEDs. I was having some trouble getting the LEDs and FSRs to stay connected to the jumper wires, so I soldered end tips onto all of them, plugged them into female-male jumper wires, and wrapped them in electrical tape for the final presentation.

Figure 4: Front and back faces of body-centered cubic lattice embedded with LEDs and FSRs.

Final Setup

When I initially applied a force to the front face of the lattice, it actually expanded outward as a result of the Poisson effect. This effect was so great that the corner FSRs were not picking up any force, since the upper face of the slots I had cut out of the lattice was lifting upward. To counteract this, I built a foam-board frame to contain the lattice. This made the entire setup more portable and pleasing to look at, though it took away the side view of the lattice and made it more difficult to see the bottom-face force response.

I'll be honest: this setup looks like a jumble of wires, and keeping the wires clean and orderly was an extremely difficult aspect of this project. After a few tries, I decided to separate the wiring for the left and right sides of the lattice, which effectively cut my odds of miswiring in half. I also color-coded the jumper wires to improve visibility for myself and viewers.

Figure 5: Side view of foam board slots for FSRs and LEDs.
Figure 6: Split wiring from left and right sides.
Figure 7: Arduino board using all analog ports. Voltage source from battery pack.

Figure 8: Of course, the code.

Introduction to Photocurable Resin

Nancy Zhang first joined Carbon3D as a staff research scientist. While her first years were primarily focused on tailoring resins for a project with Adidas, she has maintained a versatile role throughout her career. After the successful launch of the Adidas midsoles, Nancy moved to a managerial position in R&D, where she focuses on elastomer development and formulation. Her sharp enthusiasm and remarkable knowledge base made it clear I was talking to the right person to learn about resins. She is the type of person who knows what makes a resin: how to achieve the right elasticity, strength, biocompatibility, and "printability", and how to ensure that the resin is photocurable or, potentially, recyclable. As one can imagine, that is a lot of demands for one material. As lead of Material Characterization, it's no wonder she describes her work as "brain gymnastics".

Prioritizing the mechanical properties and printability of a resin is only the tip of the iceberg in material characterization. Additive manufacturing provides a platform for versatile product manufacturing, which demands material versatility along with it.

Comments from the Author: I first envisioned this article as an interview I held with Nancy Zhang, an R&D Manager of Material Characterization, several months ago. Nancy helped guide a significant portion of my ongoing additive manufacturing research, and while I don't plan to delve deep into the chemistry, I will include enough to discuss how photocurable resins are made for layer-by-layer additive techniques. Something I've taken particular notice of is that there seem to be few papers making general formulation assessments tied to the mechanical behavior of photopolymer resins in lithography, which may point to the need for a thorough review paper.

“Let’s say I’m selecting a material for a car product. I only need to look at the car and the function of the component. I don’t have to think about the car, where it drives, everything outside the car, and being thrown around in the trunk.”

Layer-by-Layer Lithography Printers

The premise of the SLA/DLP process is to solidify photocurable resin using a light source and sequentially lift the solidified part out of a resin reservoir (vat). SLA (stereolithography apparatus) processes can be top-down systems, where a scanning laser above the resin reservoir traces each layer of solidified material. DLP (digital light processing) processes, by contrast, are typically bottom-up: the light source (a projector) emits light into a shallow reservoir from below. When a layer is cured, the component is lifted out of the vat, and another layer is cured.

SLA/DLP processes are similar to FDM (fused deposition modeling) methods only in that both solidify material layer by layer; their similarities stop there. Once a part is completed on an SLA/DLP printer, there is typically a second post-processing step. Post-processing cleans up any uncured material from the surface of the part: the excess resin is washed away using acetone or another solvent, and the part is placed in a UV or thermal oven to cure any remaining material without introducing new resin.

Volumetric lithography is another type of additive manufacturing; it differs from SLA/DLP and FDM methods in that it does not solidify material layer by layer, but rather solidifies a part volumetrically. Volumetric lithography printers have demonstrated 30-second print times, micro-scale resolution, and heightened material compatibility for printing softer materials such as polymeric resins, acrylates, and urethanes to generate biocompatible parts. (1)

Many forms of layer-by-layer lithography printers (see image) have a narrow viscosity range for rapid printing. The liquid resin should be able to move quickly throughout the vat; that is, after each layer is cured and the part is raised for the subsequent layer, the surrounding resin should fill the void left by the previous layer. This becomes a problem when printing with higher-viscosity resins, which force slower print speeds to allow the resin to flow. Some estimates in the literature put the resin viscosity for rapid printing below 5 Pa·s. (1,2) This dependence on resin flow in layer-by-layer printing suggests the development of printing techniques that are independent of resin flow. (3)*

Within research on additive manufacturing processes, the majority of studies have investigated the mechanical properties of AM parts under different process parameters such as post-curing time, layer thickness, and orientation. Most of these studies characterize the material properties of additively manufactured parts through experimentation, but they do not provide an accurate prediction of the mechanical properties from the resin before cure. This information is important, since a number of parameters that must be taken into consideration during resin formulation affect the overall material character: the amount of photoinitiator, oxygen presence, apparent viscosity, and the curing dose are just a few of the variables that significantly impact the final cured part. (4)

*This is the idea behind computed axial lithography (CAL), which I intend to write about in subsequent articles.

“Imagine Printing a Spring”

The viscosity of a photocurable resin has been shown to be positively correlated with the elasticity of the post-cured material. High-viscosity resins yield parts with higher "green" strength, the strength measured after a stage-1 cure. When a resin is developed, formulators may plan for lower viscosity or for increased green strength, though the latter involves an extended print time, which can inhibit the speed of production.

Printing elastomers is more difficult than printing rigid polymers because of this positive correlation between viscosity and green strength. Elastic resins are characteristically less viscous than high-strength polymers, which leads to "sticking" at the print interface due to surface tension.

“With [additive manufacturing], the idea is that a material can be printed into any object, quickly and easily, and that object can be used in any environment.”

Photoinitiators for Photopolymerization

Photopolymerization is a very general term for any light-induced polymerization reaction in which an initiating molecule (the photoinitiator) induces a chain reaction that combines a large number of monomers or oligomers into a polymer chain. These reactions are referred to as free-radical reactions and are carried out in three dimensions, so that multiple polymer chains may bond to one another (cross-linking) and produce a polymer network. Most resins by themselves are not reactive to light; the photoinitiator plays the crucial role of absorbing light energy and reacting with an available species to begin the chain reaction between the resin's monomer/oligomer units. (5)

Oxygen Presence

Oxygen is known to be a reactive element, and its presence has detrimental effects on free-radical polymerization. During a photoinitiation reaction, oxygen decreases the yield of the initiating species by bonding with radicals to produce highly stable compounds that inhibit the growth of a polymer chain. Recent studies have indicated that when printing in high-viscosity resins, reoxygenation at the layer interface is much slower, which makes the polymerization process easier. Low-viscosity resins, on the other hand, reoxygenate rapidly at the layer interface, leading to incomplete interfacial layer bonding. (6)

Some reviews have also concluded that layer-to-layer (interfacial) strength is actually promoted by oxygen, since oxygen leads to a slower consumption of double bonds at the surface layer**. Because oxygen decelerates the photoinitiation reaction, an increase in oxygen leaves more unconverted bonds, which can react with the subsequent layer and thereby improve layer-to-layer strength. (7)

At Carbon, the team might use differential scanning calorimetry (DSC), which measures heat flow as a function of time and temperature, to estimate how effective the cure was, since converting carbon-carbon double bonds to single bonds releases heat.
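
For readers unfamiliar with the method, this is roughly how a DSC exotherm is turned into a degree-of-cure number; the enthalpy values below are made-up placeholders, not Carbon data:

```python
# Estimate degree of cure from DSC exotherm data (illustrative values only).
def degree_of_cure(residual_exotherm: float, total_exotherm: float) -> float:
    """Fraction of reactive double bonds already converted.

    residual_exotherm: heat released (J/g) when the printed part is fully cured in the DSC.
    total_exotherm: heat released (J/g) by curing the neat, uncured resin from scratch.
    """
    return 1.0 - residual_exotherm / total_exotherm

print(f"{degree_of_cure(45.0, 300.0):.0%} cured")  # placeholder numbers -> 85% cured
```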

Formulation and Characterization

Formulating a new resin is guided by the properties you need. When developing a new material, the Carbon team will set a goal, say a high elastic modulus, and begin the search by examining the polymer families that can meet that benchmark. As they add required properties for the new resin, they narrow down the polymer selection. As with most polymers, the hardest behaviors to make formulation judgements about are real-world properties such as UV exposure, polymer aging, and chemical compatibility, because they cut against the objective of building versatile resins.

This, of course, is not everything that goes into the complex science of resin development, and there is still a lot to uncover about post-curing processes and curing dose. I'll save those for another time. For now, there is at least some appreciation for the hard work going into integrating additive manufacturing into industry and building access to fully photocurable, biocompatible, high-strength, and maybe one day, fully recyclable resins.

**Zeang Zhao et al. also concluded that "interfacial strength decreases with curing time and incident light intensity, while the presence of oxygen can significantly improve the strength at the interface." They also found that interfaces with improved strength can be obtained either by decreasing the amount of photoinitiator or by using short-chain crosslinkers that increase the concentration of double bonds.

References

(1) Yang, Y., Li, L., & Zhao, J. (2019). Mechanical property modeling of photosensitive liquid resin in stereolithography additive manufacturing: Bridging degree of cure with tensile strength and hardness. Materials & Design, 162, 418-428. doi:10.1016/j.matdes.2018.12.009

(2) Quan, H., Zhang, T., Xu, H., Luo, S., Nie, J., & Zhu, X. (2020). Photo-curing 3d printing technique and its challenges. Bioactive Materials, 5(1), 110-115. doi:10.1016/j.bioactmat.2019.12.003

(3) Kelly, B. E., Bhattacharya, I., Heidari, H., Shusteff, M., Spadaccini, C. M., & Taylor, H. K. (2019). Volumetric additive manufacturing via tomographic reconstruction. Science, 363(6431), 1075-1079. doi:10.1126/science.aau7114

(4) Taormina, G., Sciancalepore, C., Messori, M., & Bondioli, F. (2018). 3D printing processes for photocurable polymeric materials: Technologies, materials, and future trends. Journal of Applied Biomaterials & Functional Materials, 16(3), 151-160. doi:10.1177/2280800018764770

(5) Fouassier, J. P., & Lalevée, J. (2012). Photoinitiators for polymer synthesis: Scope, reactivity, and efficiency. Weinheim: Wiley-VCH.

(6) Lalevée, J., et al. (2010). Radical photopolymerization reactions under air upon lamp and diode laser exposure: The input of the organo-silane radical chemistry. Progress in Organic Coatings. doi:10.1016/j.porgcoat.2010.10.008

(7) Zhao, Z., Mu, X., Wu, J., Qi, H., & Fang, D. (2016). Effects of oxygen on interfacial strength of incremental forming of materials by photopolymerization. https://www.sciencedirect.com/science/article/pii/S2352431616301055