The term metallography has several definitions. In the strictest sense, metallography is the study of the structure of metals and metal alloys, typically using magnification by optical or scanning electron microscopy. A second widely used definition of metallography is the technique and process of preparing metal samples to reveal and display their internal structure, or microstructure. For large components, a smaller piece, or sample, is cut from the component for preparation. Selecting the appropriate area of the larger part for sampling can be critical to the objective of the evaluation, since many components do not exhibit a uniform microstructure. The sample is then encapsulated in thermosetting plastic or cold cure epoxy, called a mount or micro, that is typically between 1 and 2 inches in diameter. The type of mounting material used depends on the characteristics of the sample configuration and on what aspects of the sample are of interest.
The mount holds the sample in the desired orientation and makes it easier to handle for the next steps in the process – grinding and polishing. Grinding is done using progressively finer grits of abrasives, and must be done carefully to avoid smearing of the metal, which obscures (or distorts) its internal structure. Following grinding, the mounted sample is polished to a mirror finish, typically with fine diamond particles suspended in a light oil, and then with even finer aluminum oxide particles (alumina) suspended in filtered water. A great deal of information can be learned by examining the mount in this condition, and photomicrographs are often recorded to document the sample in this “as-polished” condition. However, this process is usually followed by etching of the sample with an acid. Etching is done using a wide variety of acids or combinations of acids depending on the material that is being etched. Etching reveals a vast amount of information about the sample’s microstructure, and through interpretation of that microstructure, the “history” of the material, such as how it was heat treated; what temperatures it was exposed to in service; whether or not it was properly forged, machined, or plated; whether it was exposed to corrosive environments; and other valuable information. This “history” is revealed by examination with an optical metallurgical microscope (see metallograph) and/or a scanning electron microscope, and is photographically documented.
The term fracture toughness is used in several ways. When used generically, it refers to a material’s resistance to crack growth under stress. This definition is often used to describe the results of fracture toughness tests such as the Charpy impact test. Using the more rigorous definition, fracture toughness refers to strictly defined, mathematically determined results obtained from carefully pre-cracked test specimens that are then failed by the application of a load or force. Fracture toughness results, based on this more rigorous definition, are less common due to the relatively high costs of preparing and testing these specimens. In either case, fracture toughness measurements estimate the stress and the flaw, or crack, size that a structure will tolerate before it fails.
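The stress–flaw-size tradeoff described above follows from the linear-elastic relation K = Yσ√(πa). A minimal sketch, with purely illustrative values (the toughness and crack size below are hypothetical, not design data):

```python
import math

def critical_stress_mpa(k_ic_mpa_sqrt_m, crack_length_m, geometry_factor=1.0):
    # Linear-elastic fracture mechanics relation K = Y * sigma * sqrt(pi * a),
    # solved for the stress at which a crack of length a becomes critical.
    return k_ic_mpa_sqrt_m / (geometry_factor * math.sqrt(math.pi * crack_length_m))

# Illustrative values only: a steel with K_IC of 50 MPa*sqrt(m)
# and a 2 mm crack tolerates roughly 630 MPa before fast fracture.
sigma_c = critical_stress_mpa(50.0, 0.002)
```

Note that quadrupling the crack length only halves the tolerable stress, which is why small flaws matter so much in high-strength parts.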
A heat treating process that increases the surface hardness of a part by immersing it in a carbon- and nitrogen-rich atmosphere at elevated temperatures. This results in the diffusion of carbon and nitrogen into the surface of the part. The depth of diffusion depends on the temperature and the amount of time that the part is held at that temperature. The amount of carbon and nitrogen entering the part can be controlled by adjusting the amount of carbon and nitrogen (called potential) in the furnace atmosphere. Carbonitriding produces a shallow high hardness surface layer (also called case hardness) and is commonly performed on parts that are thin or have relatively small cross sections and require enhanced wear resistance in service, such as self-tapping screws.
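The time dependence of case depth noted above is diffusion controlled, so depth grows with the square root of time at a fixed temperature. A rough sketch; the coefficient is hypothetical, since the real value depends on temperature, alloy, and atmosphere potential:

```python
import math

def case_depth_mm(k_mm_per_sqrt_hour, hours):
    # Parabolic (diffusion-controlled) growth: depth scales with sqrt(time)
    # at a fixed temperature. k is a temperature-dependent coefficient;
    # 0.3 mm/sqrt(hour) below is purely illustrative.
    return k_mm_per_sqrt_hour * math.sqrt(hours)

d_1hr = case_depth_mm(0.3, 1.0)   # 0.3 mm
d_4hr = case_depth_mm(0.3, 4.0)   # 0.6 mm: 4x the time, only 2x the depth
```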
An improperly adjusted carbonitriding atmosphere can result in alterations to a material’s microstructure that can actually decrease its surface hardness. Improper carbonitriding can also produce sub-surface voids or holes in parts. This can significantly reduce fatigue strength.
The deterioration of a material by chemical or electro-chemical interaction with its environment. Corrosion can take on many forms depending on the type of material which is corroded, the stresses the material is subjected to in service while corrosion is occurring, and the environment to which the material is exposed. Examples of the various types of corrosion include uniform (general) corrosion, pitting corrosion (small concentrated corrosion), intergranular corrosion (at the microscopic crystal boundaries of a material), and selective leaching (corrosion of only one element in a multi-element alloy).
A simple formability test in which a strip of metal is bent over a mandrel of specified radius. The bend is then examined for cracks or tears; cracks or tears longer than a specified length indicate a failure. Bend testing is performed on plate or sheet metal, which is manufactured by passing the metal between rollers until the desired thickness is attained. The orientation of the bend test relative to this rolling direction produces quite different results, as rolled materials bend more easily across the rolling direction than parallel to it. Bend testing can be used to predict a material’s suitability to similar bending processes in manufacturing applications. However, it is not a good predictor of a material’s suitability for three dimensional forming processes, such as drawing operations used to form cup-shaped parts, or other three dimensional shapes.
Welds are also tested using a similar procedure called a guided bend test. This test uses one of several types of fixtures to bend the welded test coupon to determine the ductility and integrity of the weld. This test is specified for welder qualification and welding procedure requirements under ASME IX, EN 287 and 288, and ISO 15614 Part 1.
Adhesive wear occurs at the interface between two sliding surfaces. A shaft rotating in a bushing is a good example of two such surfaces. If there is insufficient lubrication between the shaft and the bushing, the resulting friction will cause a buildup of heat. This can lead to elevated temperatures at relatively small localized areas, which are high enough to melt the shaft, the bushing, or both. When this occurs, a microscopic weld is momentarily formed between the shaft and the bushing. These “micro welds” are broken within a fraction of a second by the continued rotation of the shaft, tearing a microscopic piece of metal from either or both parts. This process may occur at hundreds or even thousands of locations, with each revolution of the shaft tearing more and more metal from the parts until they fail.
A test procedure that determines the tensile strength and tensile properties of a material. To perform this test, a bar is machined from the material to be tested. The bar can be machined from an actual part or component or it can be made from stock that will be used to manufacture a component. Test bars vary in size but are generally about six inches in length or smaller, depending on the amount of material available. Round cross section test bars are used to test castings, forgings, wrought bars, and other three dimensional shapes. Flat cross section test bars are used to test plate, sheet, and strip materials.
To perform the test, the bar is placed in the test fixture, with clamps securing each end. Using mechanical or hydraulic force, the bar is then “stretched” or pulled and the “stretching” response of the bar is recorded. The test is usually continued, with the amount of force increased, until the bar breaks. The tensile strength of chain, wire, and wire rope can also be determined by tensile testing as well as the tensile strengths of plastics, rope, and other materials. In addition to tensile strength, tensile testing can determine properties such as yield strength, elongation, and reduction in area.
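The properties named above come from simple measurements of the bar before and after the test. A minimal sketch with hypothetical readings (0.505 inch is a common round-specimen diameter, but every value here is illustrative):

```python
import math

def tensile_properties(max_load_lb, diameter_in, final_diameter_in,
                       gauge_length_in, final_gauge_length_in):
    # Tensile strength is maximum load over original cross-sectional area;
    # elongation and reduction in area compare the bar before and after
    # fracture. All inputs are hypothetical example readings.
    area = math.pi * (diameter_in / 2) ** 2
    final_area = math.pi * (final_diameter_in / 2) ** 2
    return {
        "tensile_strength_psi": max_load_lb / area,
        "elongation_pct": 100 * (final_gauge_length_in - gauge_length_in) / gauge_length_in,
        "reduction_in_area_pct": 100 * (area - final_area) / area,
    }

props = tensile_properties(max_load_lb=12000, diameter_in=0.505,
                           final_diameter_in=0.4, gauge_length_in=2.0,
                           final_gauge_length_in=2.5)
```

A ductile material shows large elongation and reduction in area; a brittle one breaks with both near zero.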
Stress corrosion cracking (SCC) results from a combination of tensile stress and corrosion. Initiation of stress corrosion cracks usually begins at a small surface corrosion pit (see pitting corrosion) that is subjected to tensile stress. The tensile stress “stretches” the opposite sides of the pit apart which exposes new material at the bottom of the pit to further corrosion. As this “corrosion – tensile stress cycle” continues, the resulting separation grows into a crack which penetrates further and further into the part until complete fracture occurs.
The tensile stress that contributes to SCC is typically significantly lower than that required to produce a tensile fracture; however, the continuing corrosion process weakens the metal at the advancing crack front to the point at which it fractures under this tensile stress.
Naturally, a corrosive environment is required for stress corrosion cracking to occur. This environment may be extremely subtle and can range from mildly acidic rain to highly concentrated chloride road salts. Different types of material are affected differently by various environments. Stress corrosion cracking occurs in carbon, alloy, and stainless steels exposed to chlorides. Copper alloys such as brass and bronze are susceptible to SCC in chloride or ammonia environments.
Stress corrosion cracking is easily mistaken for other failure modes. Analysis of SCC should be performed by engineers with failure analysis experience in this fracture mechanism.
A Scanning Electron Microscope (SEM) offers several major advantages over the more common and familiar optical microscope. (1) The SEM has higher magnification capabilities than an optical microscope (100,000X compared to 1000X for an optical microscope). (2) The SEM can obtain in-focus images of rough samples which have a large variation in vertical height. In other words, the SEM can focus on both the “peaks” and “valleys” of a rough fracture surface at the same time while an optical microscope can only focus on the “peaks” or “valleys”. This large depth of focus (300 times deeper than an optical microscope) is one of the SEM’s greatest assets. Since it provides 3-D like images of fractures, it allows the analyst to visually identify the fracture type and origin, a critical step in any failure analysis. (3) The illumination source for SEM, a beam of high energy electrons, causes the sample to emit low level x-rays. These x-rays can be used to perform chemical analyses of the sample corresponding to the area viewed on the SEM. By increasing the magnification and thereby illuminating a smaller and smaller area, pinpoint chemical analyses of microscopic features and particles can be performed. This last feature is discussed further under the entry for Energy Dispersive Spectroscopy.
An optical microscope uses light to illuminate a sample for examination. A scanning electron microscope uses a beam of electrons. Sophisticated electronic circuitry is utilized to generate a stable electron beam which is then focused on the sample with electro-magnetic lenses. Additional circuitry transfers the focused image of the sample to a monitor for viewing. Images are then collected and saved in digital format. The beam and sample are under high vacuum during the examination process.
(Image caption) The AMRAY 1830i Scanning Electron Microscope. (Image caption) This SEM image, while not a typical materials engineering subject, demonstrates the high magnification and depth of focus capabilities of the SEM. The object at the upper left (A) is a human red blood cell. A white blood cell is shown at left center (B). The smaller spherical object at lower left is a bacterium. Magnification of this image is 20,000X.
A hardness testing technique in which an indenter is pressed into a test sample by a weight or load. The indenter contacts the surface of the test sample upon the application of a light pre-load, called the minor load. This “sets” the indenter in the sample and determines the starting point of indenter penetration. A heavier major load is then applied. This pushes the indenter into the test sample. The indenter is then withdrawn and the distance to which it has penetrated is measured and used to calculate a Rockwell hardness number. The shape of the indenter and the amount of weight applied as the major load varies depending on the material which is being tested. The indenter may be a cone-shaped diamond or a 1/16″ diameter metal ball, and the major load may range from 15 to 150 kilograms. These variations are identified by a letter designation, for example, Rockwell A, B, C, etc. or a number/letter combination such as 15N, 15T, or 30N.
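The penetration-to-number conversion described above can be sketched with the standard relations for the regular Rockwell scales, which assign one hardness point per 0.002 mm of permanent penetration (superficial scales such as 15N and 30N use different constants and are omitted here):

```python
def rockwell_number(depth_mm, scale="C"):
    # Regular Rockwell scales: one hardness point per 0.002 mm of permanent
    # penetration depth. Diamond-cone scales (A, C, D) start from 100;
    # ball scales (B, E, F, ...) start from 130. Superficial scales
    # (15N, 30N, 15T, ...) are not covered by this sketch.
    start = 100 if scale in ("A", "C", "D") else 130
    return start - depth_mm / 0.002

hrc = rockwell_number(0.08, "C")   # about 60 HRC
hrb = rockwell_number(0.10, "B")   # about 80 HRB
```

This also shows why deeper penetration means a softer material: the hardness number falls as depth grows.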
Residual stresses are internal forces contained within a part after the original source of those stresses has been removed. Typical sources of stress include loads applied in deforming operations such as bending, forging or extruding, and temperature gradients such as those encountered in casting and welding. When a metal part is permanently deformed, as in bending or forging, these residual stresses are deposited into the part. Similarly, expansion and contraction from temperatures encountered in casting and welding also deposit residual stresses. If no further change is made to the part, these residual stresses may simply remain contained within the part with little or no effect. However, if a section of the part is machined away or if the part is heated, these stresses can be re-distributed in a manner which will cause the part to distort. This distortion can result in misaligned bores, threaded holes, and bearing surfaces.
Residual stresses are cumulative. In other words, if a beam has the capacity to carry a load of 1000 pounds and contains residual stresses in the same orientation as that load carrying capacity of 100 pounds, then any load applied to the beam that exceeds 900 pounds will overload (100 pounds residual + 900 pound load = 1000 pounds) the beam’s carrying capacity and cause it to fail. This is an obvious factor to consider in the design and manufacture of load carrying parts.
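The beam arithmetic above reduces to a one-line subtraction; a trivial sketch using the same illustrative figures:

```python
def usable_capacity_lb(rated_capacity_lb, aligned_residual_lb):
    # Residual stresses acting in the same direction as the service load
    # subtract directly from a part's rated capacity. The figures mirror
    # the illustrative 1000 lb beam example.
    return rated_capacity_lb - aligned_residual_lb

# 1000 lb rated beam with 100 lb of aligned residual stress:
# any applied load over 900 lb will overload it.
remaining = usable_capacity_lb(1000, 100)
```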
The microstructure of a material determines its properties. The understanding and modification of microstructure is, in many respects, the foundation of materials analysis and engineering. Like all matter, metals are composed of atoms. These atoms combine in small clusters which are called crystals. Groups of crystals combine to form grains. The size, shape, orientation and combination of the grains with other compositional elements in metals make up their microstructure.
These factors also govern their physical properties such as tensile strength, fatigue strength, hardness, brittleness, corrosion resistance, machinability, weldability and many other critical characteristics. These characteristics can be modified and selectively optimized by a variety of processes including heat treating, alloying, cold working and others.
A classic example of microstructure modification to dramatically enhance a material’s properties is the conversion of gray iron to ductile iron. Gray iron is the most common form of cast iron. It is inexpensive, reasonably strong and hard. But it is also very brittle. If bent or stretched, it easily breaks. This lack of “give”, defined as low ductility and quantified by the amount a material will elongate or stretch before breaking, required the substitution of more expensive steel in components subjected to bending or stretching (tensile) loads in service.
The microstructure of gray iron (above left) consists of laminations of carbide and iron called Pearlite (A), relatively pure iron called Ferrite (B), and a high level of carbon in the form of graphite flakes (C). In the late 1940s a new form of cast iron was developed called Ductile Iron. A measured amount of magnesium is added to the molten iron minutes before it is poured. This produces the microstructure shown above at the right, which also contains Pearlite (D) and Ferrite (E), but with the graphite converted from flakes to spheres (F).
This microstructural change dramatically improves Ductile Iron’s ability to accept bending and tensile loads compared to Gray Iron. While Gray Iron will elongate only 0.06% before breaking, Ductile Iron will elongate 18%.
Hydrogen embrittlement fractures occur when a metal absorbs hydrogen from an external source. There are numerous potential sources of hydrogen in both the manufacturing process and service environment. These include moist corrosion, arc welding with damp electrodes, acid pickling or cleaning solutions, and electroplating baths containing hydrochloric acid. In order for a hydrogen embrittlement fracture to occur, a part which has absorbed hydrogen must be subjected to tensile stress. Within a relatively short period of time (usually 48 hours or less) from the first application of this stress, fracture will occur. The mechanism by which hydrogen embrittlement fracture occurs is relatively simple. Individual hydrogen atoms, which even by atomic standards are extremely small, diffuse into the metal at the grain boundaries, which are inherent to metallic microstructure. When the part is stressed, as occurs for example when a bolt is tightened, the microscopic gaps between the grains widen slightly. When this occurs, the hydrogen atoms become mobile, moving along the grain boundaries, and when two atoms meet they combine to form a hydrogen molecule. The amount of volume that a single hydrogen molecule occupies is many times greater than that of two individual hydrogen atoms. This increased volume results in pressure between adjacent grains which literally “pushes” the grains apart, resulting in fracture.
Hydrogen embrittlement typically occurs in relatively high strength materials with a hardness of Rockwell HRC 32 or greater. A frequently encountered example might be a plated high strength bolt which has absorbed hydrogen in the electroplating process.
Hydrogen embrittlement fractures are very similar in appearance to intergranular fractures resulting from other causes. Specific microscopic features, however, differentiate this failure mode when the fracture is examined using a scanning electron microscope. Identification of these features by an experienced materials engineer is critical to an accurate finding of Hydrogen Embrittlement in the course of a failure analysis.
Fatigue is the most common type of fracture in engineered components. Fatigue fractures are also particularly dangerous because they can occur under normal service conditions, with no warning that a progressively growing crack is developing until the final catastrophic failure. The component, whether it’s the outer aluminum skin of a commercial jet or a simple tubular chair leg, often appears to be perfectly sound with no visible distortion to warn of impending failure.
A technical understanding of fatigue requires a comprehensive knowledge of metallurgy, physics, and phenomena like plastic deformation, slip planes and dislocation theory. In fact, there are several competing theories on exactly what happens at a microscopic level when a fatigue crack initiates. But a practical understanding of the process is extremely beneficial and has direct application to its prevention in the design and manufacturing environment, as discussed below.
To the non-technically inclined, the term “fatigue” suggests this type of failure is related to the age of a component, that the material is “tired”. In fact, fatigue fracture can occur within hours of a component going into service. Conversely, even large, highly stressed components can operate for decades with no fatigue cracking or failure.
Fatigue fractures result from repeated, or cyclic, stresses. These stresses can take a variety of forms, such as bending (in one direction), reverse bending (back and forth in two directions), torsion (twisting in one or more axes) and rotation. Regardless of the variation in direction, the stress on the component at the point of fatigue fracture is always tensile stress, in which the fracture initiation site is being “stretched”, or pulled in opposite directions. To illustrate this, visualize a tube which is being repeatedly bent in one direction. The side of the tube that is concave when it is bent is being compressed. The side of the tube which is convex is being “stretched”, or subjected to a tensile stress. This is the side on which a fatigue crack will initiate.
Fatigue cracks initiate at stresses below the tensile strength of the material. Tensile strength is the stress, or load, at which a material breaks when pulled in two opposing directions. This strength is a specific value for each metal alloy, varying somewhat depending on heat treating and other processing operations. These values are widely available in engineering reference manuals, typically expressed as pounds per square inch in American references. The fact that fatigue cracks can occur at stress levels below the tensile strength of a material is difficult to explain. Theories on this focus on physical and structural changes at the microscopic (0.0001″ or less) area of crack initiation.
Fatigue is a progressive fracture mechanism. Once a fatigue crack initiates, it is driven further into the component with each stress cycle. This crack growth process continues as long as the component is subjected to cyclic stress. Depending on the magnitude and frequency of these stresses, the crack may grow over time ranging from hours to years. Eventually, the crack advances to a point where the remaining intact cross section of the component cannot sustain the next cyclic stress load – “the straw that breaks the camel’s back” – and complete fracture of the component occurs.
In the “real world” fatigue usually – that’s usually, not always – initiates at a location that acts as a stress concentration, or focal point, to the stresses imposed on a component. Stress concentrations take a wide variety of forms. They include geometric features (such as holes, slots, corners and radii), rough areas of surface finish, welds, corrosion pits, and microstructural defects such as inclusions.
The exception to “usually”, the cases where fatigue fractures initiate from component surfaces that are free of stress concentrations, typically result from one of two causes: under-design of the component, or abusive service conditions. Just as all materials have an ultimate tensile strength, they also have a fatigue strength, sometimes called the fatigue limit or endurance limit. Once a component is subjected to cyclic stresses that exceed this limit, fatigue fracture occurs. Fatigue failures of this type are less common than fatigue failures initiating from stress concentrations. Components are usually intentionally over-designed, with a safety margin that accommodates stresses several times greater than those expected in service.
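The endurance-limit comparison described above can be sketched as a simple check. The half-of-tensile-strength rule of thumb used here applies only loosely to many steels and is illustrative, not a design value:

```python
def fatigue_outcome(stress_amplitude_psi, endurance_limit_psi):
    # Endurance-limit check: cyclic stress amplitudes above the limit
    # will eventually initiate a fatigue crack; amplitudes below it
    # nominally will not (stress concentrations aside).
    if stress_amplitude_psi > endurance_limit_psi:
        return "fatigue crack will eventually initiate"
    return "below endurance limit: no fatigue expected"

# Hypothetical steel: 80,000 psi tensile strength, endurance limit taken
# as roughly half of that (a common, approximate rule of thumb for steels).
tensile_strength = 80_000
endurance_limit = tensile_strength / 2
in_service = fatigue_outcome(30_000, endurance_limit)
overloaded = fatigue_outcome(55_000, endurance_limit)
```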
Fatigue crack initiation is the critical factor in fatigue fractures. If the initiation stage can be prevented, fatigue fracture will not occur. It sounds so obvious and simple. It’s not. As noted above, initiation is the most complex stage of fatigue fracture. A low magnitude load, which would have no effect whatsoever on a component in a single application, can be devastating when repeatedly applied as thousands or millions of cycles. The cumulative effect of these cyclic loads is microscopic “shifts” in the material’s structure which ultimately produce a “dislocation” – at this scale it is too small to be called a crack – and the focal point of stress concentration is born. Corners, holes, rough surface finish, welds and other features only accelerate the process. To further complicate the issue, vibration harmonics, damping of the system, and the environment in which the component functions add a large unknown factor. Collectively, these effects become difficult to predict.
A comprehensive failure analysis, performed by experienced metallurgical or materials engineers is crucial to identifying the true root cause of the initiation of fatigue fractures. To be of value, the failure analysis must identify the cause of initiation and practical cost effective options that will prevent future fatigue failures.
A chemical analysis technique often used to analyze samples, or features on samples, which are too small for other types of analysis.
EDS identifies the elements present in a sample and determines their relative percentages. Amounts as low as 1/10th of one percent can be detected. EDS is a “survey” technique which identifies all the detectable elements present in a sample, rather than only specific elements requested by the analyst, as is common in many other chemical analysis techniques. (see Scanning Electron Microscope)
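As a rough illustration of how such survey results are reported, raw peak intensities can be normalized to relative percentages. Real quantification applies matrix (ZAF) corrections in the instrument software; the element counts below are hypothetical:

```python
def relative_percentages(counts):
    # Simplified view of EDS quantification: normalize raw peak
    # intensities to relative percentages. This sketch shows only the
    # normalization step, not the matrix corrections real instruments use.
    total = sum(counts.values())
    return {element: 100 * c / total for element, c in counts.items()}

# Hypothetical counts from a steel sample
composition = relative_percentages({"Fe": 9700, "Cr": 200, "Ni": 100})
```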
A hardness test in which a ball shaped indenter (normally 10 mm in diameter) is pressed into the material to be tested by a specified load ranging from 500 to 3000 kilograms. The diameter of the resulting indentation in the tested material is measured and this dimension is converted to a Brinell Hardness number using a mathematical formula or reference table. Brinell Hardness indentations are relatively large, making this an ideal technique for measuring the hardness of materials which are not uniform or homogeneous such as cast iron. This large indentation, however, along with the high load used, makes Brinell Hardness Testing impractical for small or thin materials or parts on which the resulting indentation is unacceptable for cosmetic or dimensional reasons.
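The conversion mentioned above uses the standard Brinell formula, HB = 2F / (πD(D − √(D² − d²))), with the load F in kilograms-force and the ball diameter D and indentation diameter d in millimeters:

```python
import math

def brinell_hardness(load_kgf, ball_diameter_mm, indentation_diameter_mm):
    # Standard Brinell formula: HB = 2F / (pi * D * (D - sqrt(D^2 - d^2))),
    # load F in kgf, ball diameter D and indentation diameter d in mm.
    D, d = ball_diameter_mm, indentation_diameter_mm
    return (2 * load_kgf) / (math.pi * D * (D - math.sqrt(D**2 - d**2)))

# 3000 kgf load, 10 mm ball, 4.0 mm measured indentation: roughly 229 HB
hb = brinell_hardness(3000, 10.0, 4.0)
```

A larger indentation diameter for the same load indicates a softer material, since the formula divides the load by the curved surface area of the impression.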
The primary objective of the annealing heat treating process is to soften a part or component. Annealing is performed by heating to a specific temperature, holding at that temperature for a specific time, then slowly cooling to room, or ambient, temperature. Temperature and time vary depending on the metal or alloy.
Many industrial metals become hard and brittle when their form or shape is changed during the manufacturing process. Examples of processes that harden and embrittle metals include drawing wire to smaller diameter, forging parts to desired shapes by hammering, and stamping sheet metal to specific shapes. Periodic annealing prevents tearing or fracture of the part in the manufacturing process by returning it to a softened condition. Once annealed, continued shaping processes can be applied to the part.
Annealing is also used to relieve internal stresses that result from manufacturing processes (stress relieve) and to refine microstructure to obtain beneficial material properties.
Abrasive wear is a cutting process. This may seem counter-intuitive since most perceptions of cutting require a tool, such as scissors, a chisel, or a machine tool. Abrasive wear, however, is cutting on a microscopic scale. The “tool” is either contaminant particles from an outside source, or a mating component.
Wear of mining or excavation machinery shovels and buckets from ore, rock or gravel is an example of Two-Body Abrasive Wear – the two bodies being the shovel and the geological material.
Hard contaminant particles between two sliding or rolling components produce Three-Body Abrasive Wear – the three bodies being each of the two sliding or rolling components, and the hard particles. Typical examples of Three-Body Abrasive Wear are bearings, bushings or pistons which have been contaminated by sand, corrosion product (rust particles), or wear particles resulting from small fractures in these components. Hard particles can also originate from within one or both of the components in the form of carbides in their microstructure, or glass reinforcing fibers in plastics. Three-Body Abrasive Wear typically accelerates rapidly since more particles are generated from the sliding or rolling components, adding to the outside contaminant particles.