The air headspace in steel circulating tanks can become rusty over time. Rust is a hard contaminant that can cause abrasion, promote oxidation and emulsify water. If there are no other options to control water ingression and rust, consider coating the area of the tank exposed to air and water condensation with a thin layer of grease that is compatible with the circulating oil. This should be done after a system drain and tank cleaning. The grease can be easily applied with a clean squeegee.

Sampling oil from machines at rest and failing to properly resuspend particles just prior to analysis are two common oil analysis-related problems that are often dismissed by users and laboratories. Find out how these issues can lead to bad oil analysis data.

Take Particle Settling and Oil Sample Agitation Seriously

When you throw a rock in a lake, it goes down – fast. Wear particles are heavier than rocks of the same size, often four to five times heavier. Of course, the heavier the object, the faster it falls. Oil is viscous, and this resistance can slow down the rate objects fall, but it doesn’t come close to stopping them.

The rate at which objects fall in viscous fluids is described by Stokes’ law. In sum, (1) the larger the object, (2) the denser the object, (3) the thinner the fluid (lower viscosity) and (4) the lower the density of the fluid (oil’s density is relatively low), the faster the object falls. Conversely, small, low-density objects in highly viscous fluids settle more slowly.
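
Written out, Stokes’ law gives a terminal settling velocity of v = d²(ρparticle − ρfluid)g / 18μ. The short sketch below is a minimal calculation using assumed, typical values for an iron particle in a fairly thin oil (roughly 10 cSt); it lands in the neighborhood of the 2-centimeters-per-minute figure discussed for Figure 1 below.

```python
# Minimal sketch of Stokes' law settling velocity, using assumed illustrative values.
# v = d^2 * (rho_particle - rho_fluid) * g / (18 * mu)

def settling_velocity(d_m, rho_particle, rho_fluid, mu_pa_s, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in a viscous fluid."""
    return d_m**2 * (rho_particle - rho_fluid) * g / (18.0 * mu_pa_s)

# Assumed values: 30-micron iron particle in a thin oil (~10 cSt, ~870 kg/m^3)
d = 30e-6                  # particle diameter, m
rho_iron = 7870.0          # kg/m^3
rho_oil = 870.0            # kg/m^3
mu = 10e-6 * rho_oil       # dynamic viscosity, Pa*s (kinematic viscosity x density)

v = settling_velocity(d, rho_iron, rho_oil, mu)
print(f"{v * 100 * 60:.1f} cm/min")  # on the order of a couple of centimeters per minute
```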

In oil analysis, this is critical because you want to know about the particles in your oil – all of the particles, including those that can damage machines and those that reveal damage has already occurred and is continuing to occur. Not much in oil analysis is more important than this.

This article will address two common oil analysis-related problems that sadly are often dismissed by both users and laboratories. These problems are sampling oil from machines at rest (oil not circulating) and failing to properly resuspend particles just prior to analysis.


Figure 1

Analyzing the Data

In reviewing research on particle sedimentation, I discovered that all of the data points to the same general conclusion: particles are very unstable in lubricating oils, even those that are completely invisible to the naked eye.


Figure 2

In Figures 1-4, you can see how Stokes’ law governs the speed of particle descent. Figure 1 reveals that 30-micron iron particles can descend at a rate of 2 centimeters per minute. The viscosity is rather low, but then again this is not uncommon for very warm oils and oils that have been diluted by solvents prior to lab analysis.

Figure 2 shows that iron particles settle at a rate roughly five times the speed of dirt particles (silica). In Figure 3, the data indicates that barely visible 100-micron steel particles can fall 4 inches in viscous 15W40 motor oil in less than one minute. Finally, in Figure 4, you see that extremely small 20-micron Babbitt particles (say, from journal bearings) settle ½ inch in just four minutes in ISO VG 32 turbine oil.


Figure 3


Figure 4

Dormant Fluid Causes Unrepresentative Oil Samples

Too often oil samples are taken when machines are at rest (not when they are running). Sometimes this is avoidable, but sometimes it is not. Live oil samples are always best. Circulation keeps the fluid homogeneous at the time the sample is taken. Lack of fluid circulation causes particle settling and sedimentation (see Figures 1-4). The longer the delay between when a machine is turned off (stopping oil movement) and when the oil is sampled, the more particles settle out and never reach the sample bottle.

Particles are like data. This data provides important information that can prescribe a needed corrective action. When particles settle out of the oil, you lose data. This lost data may prevent you from being aware of an abnormally high particle count or advanced machine wear condition. This would produce a false negative in the oil analysis results. This means the oil’s condition may be falsely reported to be better than reality.

Evidence of particle sedimentation shows up in oil sumps and reservoir bottoms. Sampling the bottom of the sump or reservoir provides little help since the sludge and sediment that accumulate there are a repository of data spanning weeks, months or even years. This is not representative of the current conditions, including the health of the oil, the contaminant level of the oil and the active rate of machine wear.

The obvious solution is to use the live-zone sampling technique. If this is not possible, the sample documentation or label must disclose that the bottle contains a cold or dormant fluid sample. This will be taken into account by the analyst when the data is interpreted.

Not all tests are influenced by dormant fluid samples. Figure 5 shows the properties of highest and lowest risk.


Figure 5

Particle Settling in Bottles and Glassware

I’ve toured a large number of oil analysis labs in my career, including several in the last couple of years. From my observations, the vast majority of these labs seem to downplay the importance of proper sample agitation or have the misconception that their in-house method is adequate. Proper agitation is practiced in some labs, but it is rare. Standards such as ASTM D7647 and ISO 11500, which provide guidance on the proper use of particle counters, clearly emphasize the importance of agitation. For example, the following is from ASTM D7647:

“Homogenize the incoming sample by shaking the sample container and its contents in the mechanical shaker. For samples 200 mL or less, shake for one minute. For samples 200 mL or larger, shake for three minutes.”

Several years ago I supervised a very basic study on sample agitation. Four identical samples were prepared using a standardized test dust (mostly silica) as the contaminant. To obtain a baseline, one of the four samples was analyzed using a freshly calibrated particle counter (control sample). The results showed 1,658 particles greater than 10 microns per milliliter (see Figure 6). The remaining three samples were then allowed to rest overnight.


Figure 6

The following day, Sample 1 was analyzed without agitation. The results showed 29 particles or less than 2 percent of the particle count found in the control sample. Sample 2 was analyzed after five minutes of vigorous hand agitation. This produced 1,287 particles or 78 percent of the control sample. Finally, Sample 3 was analyzed after five minutes in a paint shaker. The results were nearly identical to the control sample.
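
The percentages above follow directly from the raw particle counts. The short sketch below simply redoes that arithmetic against the control sample.

```python
# Quick arithmetic check of the recovery percentages quoted above.
control = 1658          # particles >10 microns per mL in the freshly prepared control
no_agitation = 29       # Sample 1: analyzed without agitation
hand_agitation = 1287   # Sample 2: five minutes of vigorous hand agitation

for label, count in [("no agitation", no_agitation), ("hand agitation", hand_agitation)]:
    print(f"{label}: {count / control:.0%} of the control count")
# no agitation: 2% of the control count
# hand agitation: 78% of the control count
```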

The importance of agitation was stressed in a National Fluid Power Association report jointly authored by Caterpillar Inc., Hiac/Royco and Butler Machinery. The report stated that “the sample extracted from the sample bottle for analysis must be representative of the whole bottle … Since settling and aggregation can drastically affect the measured particle count, the samples are shaken vigorously before analysis for three to five minutes on a paint shaker … This breaks up aggregates and disperses the particles uniformly.”

Influence of Aged Samples

Samples that have been left undisturbed for prolonged periods of time (e.g., more than one week) drop out insoluble impurities and suspended particles. These include dirt, wear debris, water, sludge, oxide insolubles, glycol, dead additives, carbon insolubles, friction polymers and certain additives. After aging, particles and impurities exhibit coherent forces that result in agglomerations in the form of sludge or microscopic clusters. Particles are also known to stick tightly to bottle and container surfaces. The longer they stay undisturbed, the tighter they adhere (by Van der Waals and electrostatic forces). These agglomerations and adherent forces vary inversely with particle size; that is, smaller particles are more difficult to redisperse than larger particles.

Oil Sample Preparation Pointer

Some oil samples and laboratory tests have higher risks than others. These would include low viscosity fluids (e.g., ISO VG 46 and below), large particles (greater than 20 microns), oils with high varnish potential, heavy particles, aged samples, ferrous density tests, particle counting and wear particle analysis. The best ways to mitigate these risks are delineated in the list of laboratory do’s and don’ts that follows:

Laboratory Do’s

  • Place samples in an ultrasonic bath prior to mechanical agitation for at least 30 seconds (especially with aged samples).
  • Agitate samples vigorously in a mechanical shaker for no less than three minutes (bottle ullage no less than 25 percent).
  • Degas the sample (ultrasonic followed by vacuum works best) immediately after agitation. (Degas procedure required only for particle counting.)
  • Analyze samples immediately after degas.

Laboratory Don’ts

  • Use hand agitation.
  • Utilize a laboratory rocker, orbital oscillator or roll device (e.g., hot dog roller) for agitation (except to keep a previously mechanically agitated sample fresh).
  • Attempt to mechanically agitate completely full bottles.
  • Dilute samples with thin solvents. If dilution is required, use ultraclean oil instead. Low viscosity solvents accelerate particle drop-out.
  • Wait several minutes while the sample sits motionless before analysis.
  • Assume no one cares about agitation.

A study on the need for sample agitation was sponsored by the Naval Air Engineering Center using analytical ferrography. The researchers found that even large iron and steel particles did not resuspend completely with aged samples when agitated by hand. In their study, when samples were agitated by a paint shaker for two minutes, approximately 47 percent more particles appeared in the 55-millimeter entry region of ferrogram slides compared to samples agitated vigorously by hand for 30 seconds. Likewise, 59 percent more particles appeared at the 10-millimeter region on ferrogram slides (representing smaller particles).

The best way to break up these agglomerates and dislodge particles from container walls with aged samples is to use an ultrasonic bath followed by vigorous mechanical agitation (e.g., a paint shaker) for three minutes. Some procedures call for use of an ultrasonic bath after mechanical agitation to aid in the coalescence of air bubbles before vacuum degas. Perhaps the best method is to sonicate both before and after mechanical agitation.

While it may be wishful thinking on my part to expect samples to be properly collected in the plant and properly agitated in the lab, I am hopeful. I also realize that proper agitation comes at a cost. However, there is also a cost to bad oil analysis data. Remember, the very oil analysis tests that need agitation the most are the ones the labs charge the most for and the ones that provide critical information on machine health.


Cutting oils can be reused several times and are typically designed for this purpose once processed through reclamation equipment. A number of methods can be used for the recycling of these fluids. Discover which is best for your application.

“What are the safety measures for the disposal of cutting oil? Is it possible to process and reuse it?”

In general, cutting oils can be reused several times and are typically designed for this purpose once processed through reclamation equipment. This is also the case with many other types of lubricants such as hydraulic fluids.

Reclamation is necessary with cutting fluids because they can degrade after a period of use due to the working and environmental contaminants to which they are exposed. Even slight mixtures of cutting fluids with other types of fluids and oils will cause them to degrade.

A number of methods and equipment are available for the recycling of cutting fluids, including skimmers, coalescers, centrifuges, settling tanks, magnetic separators and filtration systems.

Skimmers are used to remove tramp oil, which is foreign oil (such as leaked machine lubricant or hydraulic fluid) that has contaminated the cutting fluid. The tramp oil floats to the top and is pushed off using a collection belt.

Coalescers and centrifuges can also remove tramp oil as well as solid contaminants. Coalescers promote the fusing together of tramp oil into larger droplets, which then rise to the surface more rapidly to be skimmed off. Centrifuges spin the fluid to generate centrifugal forces that help separate solids and tramp oil from the cutting fluid.
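
To give a rough sense of why spinning helps, the same settling physics applies in a centrifuge, but with centrifugal acceleration replacing gravity. The sketch below is a minimal, illustrative calculation with assumed bowl speed and radius (not the specifications of any particular machine); it estimates the relative centrifugal force, i.e., roughly how many times faster separation proceeds than it would under gravity alone.

```python
import math

# Relative centrifugal force (RCF): how many times gravity a centrifuge applies.
# Assumed, illustrative values for a small industrial centrifuge.
rpm = 3000.0        # bowl speed, revolutions per minute (assumed)
radius_m = 0.15     # effective bowl radius, meters (assumed)
g = 9.81

omega = 2.0 * math.pi * rpm / 60.0   # angular velocity, rad/s
rcf = omega**2 * radius_m / g        # centrifugal acceleration relative to gravity

print(f"RCF ~ {rcf:.0f} g")  # separation proceeds roughly this many times faster than tank settling
```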

Settling tanks, magnetic separators and filtration systems can remove solid contaminants to varying degrees and efficiencies. Magnetic separators are effective for extracting ferrous particles, while settling tanks are ideal for collecting larger and heavier particles that readily fall to the bottom. Filtration systems trap solid contaminants as the fluid passes through filter media.

After several uses and reclamation cycles, eventually the cutting fluid is destined for disposal. When that time comes, disposing of the fluid must be done with care. Although environmental regulations can be tedious to follow, they are very important.

Tests must be conducted to determine whether the cutting fluid is non-hazardous. This will depend on the fluid’s properties, including its ignitability, corrosivity, reactivity and toxicity. Disposal of non-hazardous fluid may be fairly simple and inexpensive. If the fluid is deemed hazardous, it may need to be taken to a treatment facility.

The U.S. Environmental Protection Agency (EPA) has strict guidelines for a hazardous waste, including its treatment, storage and disposal. Always be sure to check your local regulations and facility’s policies as well.

To make a manufacturing process more efficient, a company must understand what lean is. To “go lean” means your workplace applies lean manufacturing philosophy and practices. Lean is an industrial practice where manufacturing facilities focus on waste reduction to create more value for the customer. There are several different lean techniques, allowing each organization to fit lean into its own distinct production process. Three of the most common lean techniques are 5S, kaizen and kanban.

5S

The 5S system is an organizational method that stems from five Japanese words: seiri, seiton, seiso, seiketsu and shitsuke. These words translate to sort, set in order, shine, standardize and sustain. They represent a five-step process to reduce waste and increase productivity and efficiency. The first step, sort, involves eliminating clutter and unnecessary items from the workspace. Next, workers must set in order by ensuring that there is a place for everything and everything is in its place. The shine step entails cleaning the workspace and regularly maintaining this state. Standardizing should be done to make all work processes consistent so any worker can step in and perform a job if necessary. The final step, sustain, involves maintaining and reinforcing the previous four steps.

Kaizen

Kaizen is a business practice that focuses on making continuous improvements. With kaizen, there is always room for improvement, and workers should constantly look to improve the workplace. This philosophy also emphasizes that each individual’s ideas are important and that all employees should be involved in the process to better the company. An organization that practices kaizen welcomes and never criticizes suggestions for improvement at all levels. This helps to create an environment of mutual respect and open communication.

Kanban

Kanban relies on visual signals to control inventory. A kanban card can be placed in a visible area to signal when inventory needs to be replenished. With this process, products are assembled only when there is demand from the consumer, which allows companies to reduce inventory and waste. The kanban method is highly responsive to customers because products can be manufactured by responding to customer needs instead of trying to predict their future needs.
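
As a toy illustration only (not any particular kanban software, and the names below are hypothetical), the reorder-point logic behind a kanban card can be sketched in a few lines: stock is consumed against actual demand, and a replenishment signal is raised only when the card’s trigger level is crossed.

```python
# Toy sketch of kanban-style pull replenishment (illustrative only).

class KanbanBin:
    def __init__(self, on_hand, card_trigger, lot_size):
        self.on_hand = on_hand            # units currently in the bin
        self.card_trigger = card_trigger  # when stock falls to this level, the card is pulled
        self.lot_size = lot_size          # quantity the upstream process replenishes

    def consume(self, demand):
        """Withdraw stock for actual customer demand and signal replenishment if needed."""
        self.on_hand -= min(demand, self.on_hand)
        if self.on_hand <= self.card_trigger:
            return f"kanban card pulled: replenish {self.lot_size} units"
        return "no signal"

bin_ = KanbanBin(on_hand=20, card_trigger=5, lot_size=15)
for demand in [8, 6, 4]:
    print(bin_.consume(demand))  # the third withdrawal crosses the trigger and pulls the card
```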

Lean manufacturing has many advantages, such as higher productivity, improved customer service, lower lead times, increased employee morale and a safer work environment. Each of these contributes to the most significant benefit of lean manufacturing — increased profits.

One of the basic tenets of manufacturing is there is never a shortage of challenges. When times are good, the challenge is how to make more on-time deliveries with the least cost possible. In this current economic environment, the challenge is how to make on-time deliveries with the resources available. Regardless of the economic environment, on-time delivery of profitable products that meet the customer expectations is the objective.

As companies reduce employees, it gets more difficult to meet the objective. People with the knowledge of certain tasks leave the business, just as those tasks seem to become more critical to meeting customer expectations. Some tasks are now performed by people who have to be trained to do those tasks or are done by people who have not done the task for a number of years.

One of the simplest ways to make sure tasks can be performed simply and easily by a number of individuals is through workplace organization. This does not necessarily require new equipment, but rather the tools required to do the task are organized in such a way as to be easy to identify, obvious in their proper function in the process and noticeable if not returned to their proper place.

Organization is not only about the physical workplace. Done properly, it also organizes the mental approach to the task. There are many techniques used to accomplish this, from suspending the tools from above the workstation to shadow boards and foam inserts. The best solution is determined by the task and how the operator needs to access the tools. The point is to lay out the workstation in order to make it simpler for the operator and easier for him or her to perform the task properly (rather than improperly).

With the proper workplace organization, it becomes easier to do the task and easier to train new operators.

Currently, one of the most popular approaches to workplace organization is 5-S. This approach was originally developed as a part of the body of knowledge now labeled as lean manufacturing. You can find out more about it in the book “Five Pillars of the Visual Workplace” by Hiroyuki Hirano. In English, the five steps are sort, straighten, shine/clean, standardize and sustain.

When you are attempting to accelerate a lean turnaround or for that matter any implementation, the first item on your list should be to stop running on overload. Remember, it’s not a matter of time but rather a matter of priorities. Define your goals and needs correctly, and create realistic completion dates as well as clear-cut plans. Time will take care of itself. Running by the seat of your pants creates numerous project breakdowns and a constant firefighting effort that will propagate throughout the organization.

A lean transformation in your company may be the best way to create the scenario you need for success. You need to instill processes and culture, manage events, and work at warp speed. Lean has a structure that is well-defined — a proven commodity that you can follow in a step-by-step basis. Don’t let poor planning be your downfall.

Develop Culture

Lean is about people. The culture you need to develop is about bringing decision-making down to the individual level. This is an important part of working at warp speed. It will also readily identify those who are not on the team and not willing to make the commitments that you need to survive. People who don’t care will not commit. On-boarding is extremely important in a turnaround, and working at warp speed will only make it more obvious who is not committed.

Maps and Metrics

The lean tools of current state, value stream and future state mapping, along with the metrics you determine, will provide the realistic guidelines you need to stop the constant firefighting and running on overload. Using these tools will also identify your customer needs and eliminate waste within your organization quickly and efficiently.

Inject Kaizen

Holding regular, well-structured kaizen events will actually reduce the number of meetings needed and provide clarity to the entire organization. So many times when firefighting is taking place, there seems to be a constant flow of meetings that accomplish nothing. A kaizen event incorporating the lean principles of continuous improvement creates the feeling that there is always room for improvement. Meetings gain a much higher level of importance.

Lean will produce higher quality, reduce waste and generate greater efficiencies. As a result, improved profitability will be achieved. This sounds like a pretty good formula for a turnaround. Can it be done quickly enough to survive?

The way you accelerate any process is to do things concurrently, so segment your business and employ the lean principles as identified above in each of them. Use the four key areas of the balanced scorecard as a guideline for segmentation in a small business:

1) Financial: How do we look to shareholders? (Include bankers, vendors, etc.)

2) Customers: How do customers see us? (Sales and marketing)

3) Internal Process: What must we excel at? (Operations)

4) Innovation and Learning: Can we continue to improve and create value? (Training)

Theoretically, utilizing these four areas and the tools described above should allow you to accelerate your turnaround four times faster. There are a couple of keys to this type of deployment:

First, create a balanced scorecard for the organization in order to provide clarity. I use the one-page business plan as my tool of choice, and I would utilize it for each segment that is created.

Second, training is extremely important but seldom incorporated in a turnaround. The reason most often given is that there’s not enough time or money. However, without the skillset to drive decision-making downward in the organization, much of what you are trying to accomplish will fail. Think of the adage that you must give in order to get.

Finally, lean experience is another critical factor. Few people have ever experienced a turnaround or an economic slowdown of the magnitude we are experiencing now, so it will be much easier to find a lean practitioner or consultant than someone with turnaround experience. This person or organization must participate with each of the segments to ensure that the overall objectives are compatible. In the beginning, there will be quite a bit of involvement. Soon, that participation will be reduced to training and the kaizen events.

Lean transformation can be done very effectively and efficiently. I did not even touch on the tools of 5-S and numerous others that can effectively reduce vast amounts of waste. If you were in this situation, what tools would you use?

A company that is moving from an old way of doing things to applying lean principles is said to be going through a lean transformation.

A lean transformation is more than just eliminating waste. It involves changing a culture. It requires changing your thinking. It means changing your relationships with your customers and suppliers. A lean transformation is a complete transformation of your business, and it is going to be difficult to accomplish.

“When it works, it is a frighteningly powerful competitive advantage, but to accomplish it and keep it going is very difficult.”

In the foreword to the same book, James Womack writes:

“Why is lean thinking and lean manufacturing so challenging to implement? It is not — as many early commentators believed — a set of isolated techniques, but a complete business system, a way of designing, selling and manufacturing complex products that requires the cooperation of thousands of people and hundreds of independent organizations. A successful ‘lean leap’ (lean transformation) requires ‘change agent’ leadership, a sensei (teacher) to demonstrate the techniques, a long-term commitment to the workforce to inspire their best efforts, proactive development of the supply base, aggressive management of distribution and sales system to smooth demand, and a score-keeping system (accounting methods plus individual compensation) that motivates managers to do the right thing every time.”

A lean transformation is really about people, and people changing how they do their jobs. The way to get started is to start small, let employees become comfortable with and confident in the changes a lean transformation brings, and build support for future changes. This usually means starting with the low hanging fruit, the easiest things to do that also bring the largest benefits.

A Lean Transformation Example

Let’s say you own a company that makes widgets. These widgets are usually black and come with two buttons. Lately, customers have been ordering widgets in other colors such as red and blue. So once a month, your company does a special production run to produce the red and blue widgets. Of course, you charge more for these special orders.

One day the sales manager comes into your office. One of your largest customers has just switched to buying widgets from your competitor, ABC Widgets. The competitor can supply any color widget, put the customer’s logo on it and deliver them in two days. Their price is also 10 percent less than your price.

“That’s impossible!” you say. “They’ll soon go out of business.”

But over the next few months, more and more of your customers switch to buying from your competitor. Not only is ABC Widgets delivering custom widgets at a lower cost, they are guaranteeing better than a 99-percent on-time delivery, have higher quality and have just come out with a new lightweight three-button widget. What happened?

ABC Widgets has undergone a lean transformation.

Elements of a Lean Transformation

What changed? How is ABC Widgets able to produce high-quality custom widgets, sell them at a lower price, deliver them quickly and still be able to innovate and develop new widget designs?

There are a variety of lean principles that combine to produce a lean transformation. They have names such as kanban, kaizen, 5S and total productive maintenance. However, a lean transformation does not start by picking a lean principle and implementing it. It starts with your customers.

What Does the Customer Value?

The first question to ask is, “What is the customer willing to pay for?” If there is a step in your manufacturing process that is not adding value for the customer, that step should be eliminated.

For example, widgets have always been produced with rounded corners. Everyone has forgotten why, but that’s the way they’ve always been made. But ABC Widgets learned that customers don’t need widgets with rounded corners. So their widgets have square corners, which eliminated three steps in the production process.

On the other hand, growing numbers of customers wanted widgets of various colors. Custom colors added value to widgets that customers were willing to pay for. As a result, ABC Widgets designed an automated painting machine that had multiple paint nozzles instead of one. The new machine could paint individual widgets in any color with no change-over time required and at no more cost than painting them all black.

Introducing Kanban

As ABC Widgets improved its understanding of its customer’s needs, the company implemented a “pull” type of manufacturing system. This lean principle uses customer demand to “pull” products through the manufacturing process. For ABC Widgets, this meant that widgets were not manufactured until they had been ordered. With waste in the production process eliminated, the entire process tightened up to include just those things that added value to the customer. Custom widgets could be produced in less time and with higher quality. Instead of waiting up to a month to get widgets in the colors they wanted, ABC Widget customers could get their custom-colored widgets in just a few days.

The “pull” approach to manufacturing is a lean technique called kanban. A kanban is a signal that a customer order has been received and a product needs to be produced. With kanban, inventories are reduced (freeing additional space), and products are produced based on customer demand.

Not a Smooth Process

Things did not go smoothly as ABC Widgets moved ahead with its lean transformation. There was confusion, and people tended to go back to their old way of doing things. Problems such as machine breakdowns, long change-over times to reconfigure machines to make different styles of widgets and not having the right tooling available when it was needed were all serious stumbling blocks that looked like they would derail the lean transformation.

Introducing 5S

The 5S lean technique involves getting cleaned up and organized. With 5S, unused tools, equipment and supplies are eliminated or stored in a remote location. Regularly used tools and supplies are stored close to where they are needed in a way that makes it easy to quickly and correctly return them to their proper storage location when they are not in use.

Once ABC Widgets had implemented 5S, some of the problems were cleared up. Clean work areas and machines made it easy to spot oil and fluid leaks. The sources of the leaks could then be fixed long before they became major maintenance problems. Tools and dies could be easily found when they were needed. Employees took more pride in their work and became more supportive of the lean transformation, including contributing their ideas for further improvements.

Kaizen Continues the Improvements

Kaizen is an on-going process of continual improvement. It is based on suggestions from those closest to production, the employees who make the products. However, at times suggestions come from other employees or even from customers. The objective is to continually make small improvements, and it does not matter where the ideas for improvement come from.

Even when everything was going well, ABC Widgets continued to encourage employees to make suggestions, and they acted quickly on those suggestions. Employees could quickly see the positive impact of their suggestions, and when managers thought a suggestion would not work, they quickly followed up with the employee. By talking with those making the suggestions, the ABC Widgets managers learned about the problems and issues that were causing waste or quality problems, and in some cases solutions not envisioned in the original suggestion were developed.

Other Lean Principles

As ABC Widgets continued its lean transformation, other lean principles were applied. Total productive maintenance was used to put preventative maintenance into the hands of the machine operators. Value stream mapping helped identify activities that added value customers wanted and eliminated waste. Poka-yoke was used to reduce the possibility of errors and improve quality.

Visual Communication

Many lean techniques and principles rely on visual communication to keep employees informed about production status and customer needs. ABC Widgets used DuraLabel printers to make custom labels and signs that supported the changes being made. Reminders about the changes were posted where they were most needed. Employees were able to make and sustain the needed changes with minimum disruptions to the on-going production. The signs and labels also provided warnings about new hazards, changes in production configuration and modifications to the facility. With the ability to make custom signs that specifically addressed each situation, ABC Widgets kept its employees informed and safe.

The Lean Transformation Continues

It took ABC Widgets nearly two years to reach the point where it began to take away significant numbers of customers from its competitors. But that was not the end of its lean transformation. As the former industry leader tried to begin its own lean transformation, ABC Widgets continued to apply lean principles and improve further. New widget designs continued to expand the market, and powered by its ever-improving efficiency, quality and innovation, ABC Widgets moved into making related products and serving new markets. It was not easy. The lean transformation never eliminated the need for hard work, but it multiplied those efforts so that the results were much greater than had ever been experienced in the past.

The United States has created levels of wealth well beyond any other civilization in history, yet much further potential is sitting right under our noses. This potential lies in lean thinking; that is, the lean business model. Applying the lean business model across the board would lead to immense productivity improvements and create an environment of deflation (a deflationary economy) and very significant wealth creation. This situation would replicate the near-zero inflationary period the United States benefited from during its first 135 years.

From a historic view, inflation was and remained very small throughout the first century of our country’s existence — even up until around 1910. During this same time, income increased substantially as the country industrialized from both an agricultural and manufacturing standpoint. Much of this was driven by the Industrial Revolution, which significantly increased manufacturing output and also greatly improved agricultural output and efficiencies due to better distribution networks and the ongoing mechanization of the agricultural industry.

During this part of our country’s history, we benefited from what I call quasi-deflation; that is, though prices did not necessarily decrease, they increased at a dramatically low rate over the course of many years (in fact, decades), while the income Americans earned increased substantially.

Although deflation is typically viewed with trepidation, in the past it has been, and in our future it can be, a truly beneficial force. It may be viewed as price stability, enhanced buying power and added value.

Deflation can be defined in two ways: as a decrease in the overall price of goods and services, or as a decrease in the money supply and credit. While the second definition is considered classical economics, this discussion will use the first definition.

Applying lean is about removing waste from the system. By removing waste, work-in-process decreases, productivity increases, lead times decrease, quality improves, and on and on.

To summarize, lean reduces the cost of any product or service by eliminating waste in the development, production and distribution of these products or services. In other words, it reduces cost (notwithstanding the cultural impact and change that must go hand in hand with the cost-improvement aspect). So with all things being equal, if costs of all products and services decrease via the lean business model, that would, in turn, drive prices down over time as well.

Many products and services actually follow this model from a deflationary standpoint. For example, electronics are in a constant state of price decrease while their performance, features and quality are improving. Think of the price of iPads, HD TVs, cell phones and the like. The prices on these products can drop on a monthly or weekly basis. Obviously, improved technology is what drives price reduction in this case, combined with free-market competition. But couldn’t any product have the same pattern if lean was applied? Maybe it would not be as drastic of a price reduction or over such a short timeframe, but there is no reason why eliminating waste (costs) over time in a competitive free market could not have the same effect.

As mentioned above, our country’s history has shown that it can and has happened. Anyone who has been involved with a deep implementation of a lean business model understands the magnitude of waste that infects all business — be it manufacturing, service, government, design or distribution. For as much as we, as a nation, have yet to create, we have nearly as much yet to improve.

With the current economic environment, there is always the tendency to hunker down and try to ride the storm out, but is that the best strategy? Probably not. While there is most certainly a need to watch budgets closely — even very closely — and tighten up on extra activity, now is the time to take advantage of developing your current employees to help improve your operation and, in fact, all business activities.

A growing number of companies have been actively implementing, and even working to accelerate, training within industry (TWI) in their everyday business practices and operations. The program is designed to enable improvements every day by everyone, create a more functional relationship between employees and their supervisors (which translates into solving issues before they become problems), and get workers properly trained in record time. To summarize: Save money, leverage and enhance tribal knowledge, and make the entire business enterprise more competitive.

TWI has a long history of success in this area. It also continues to have success with firms today. In the Midwest location where I live, a growing number of companies are effectively using TWI to meet this need. It is giving them a competitive advantage during this critical economic slowdown. These same firms are steadfast in their belief that deploying TWI skills will put them at a significant advantage when the economy picks up. They plan to be well ahead of the game, and they know it.

It is typical that the businesses that did not just hunker down but put into action a plan to develop and improve themselves are the firms that not only survive through difficult periods but thrive when better times return. Many organizations are doing this right now, and my guess is that they will be reaping the rewards in the months ahead as business improves.

In many of the articles and books that are published today, you see a great number of improvements in different industries and companies using different methodologies and tools. Many times, it seems that if you use the same tools or methods, you should get the same results. This is not always the case. There is an underlying story to a company that excels at making improvements, and that is the culture. I like to think of the culture as the foundation of effective operation. Culture includes leadership, initiative, teamwork and all of the other nice words we love to throw out.

Webster’s defines culture as “All the knowledge and values shared by a society.” I think the interesting part of this definition is the word “shared.” Have you ever been in an environment where everyone seems anxious to pitch in and volunteer to help others accomplish projects? In these instances, everyone has a shared sense of purpose. Each one of them probably knows how important the others and their work are to the overall goal of the organization. This is a powerful motivator, especially when you understand that the success of the organization relies on everyone. That is also why organizations that are not doing very well normally have a very compartmentalized culture in which individuals think that others are “not important and should go away.”

I am not going to give you some five-step process that will change your culture. What I will give you is a simple piece of advice, and that is to start communicating … always. What I am saying is to communicate with everybody you deal with and find out what drives them to do what they do. Shared goals and attitudes are not just sent down from above but are also influenced by the bottom or true foundation of the organization. Remember that the deck-plate workers are the value-adding group for the company and should be a driving force to set the culture and communication. The management and executives should be the facilitators of the communication process.

You will find that programs and projects will start to run more efficiently, and the atmosphere will change. Keep in mind that this is not an overnight change. It will require time and hard work. Take a bite of humble pie and start talking to those around you to find out what their purpose is and where their beliefs are leading them.

When I speak with people about lean programs, I am often asked, “Where should we start?” While this sounds like a very simple question, it actually requires a lot of thought. However, the simple answer is that you must first decide where you want to go before you know where you should start. Think of it like this: If you don’t know where you want to go, it really doesn’t matter if you have a map.

I have a GPS for my car that is pretty high tech. It communicates with satellites that are orbiting thousands of miles above the Earth. It can tell me where I am, give turn-by-turn directions and even tell me how fast I am going. However, with all of that technology, if I can’t put in a final destination, the GPS is pretty useless.

The same is true for lean, Six Sigma or whatever you call your continuous improvement effort. If you don’t know where you are trying to go, even the best lean program won’t help very much.

There are mountains of information about lean, Toyota Production System and continuous improvement processes out there. You can read a book a week and probably spend several years digesting just what is written so far. You can hire consultants that can make a process lean down to the tenth of a second, and you can save a lot of money. You can even teach some of your folks about lean in the process. All of these things are good, but they are not exactly what we are after – yet.

You should be able to very clearly and concisely articulate where it is that you want to go before you do any of the things above. If you can’t, don’t start yet. When I ask this simple question to companies and people who ask me where to start, I usually get answers that may be accurate but aren’t really relevant. For example, I might hear, “We want to get better,” or “We want to save money.” While those are things that lean can help achieve, neither of them paints the picture of what you want to accomplish.

You can read some of the lean and continuous improvement books out there to get ideas or talk to some lean professionals, but be sure to decide where you are trying to go before you take off on the trip.

Some of the things you should consider include:

  • Have I gone through a reorganization that could impact the quality of my product?
  • What is changing on the economic landscape that will impact my business?
  • What new changes may come out of the new government administration that may change my business model?
  • What opportunities are being created right now that we need to capitalize on for the future?
  • Are there “green” issues that may impact my business?
  • How old is our product offering, and is it time for a review?
  • What is the long-term corporate strategy, and how do we support that strategy?

There are many other items that you should discuss as a team when formulating the vision for a continuous improvement program. The point is to have the conversations now before you start. Make sure you can clearly and consistently articulate what is important and what you are working toward.

The challenge for you and your colleagues is to be able to clearly answer and articulate an answer to the question, “Why are we embracing continuous improvement?”

Some organizations misuse lean manufacturing to overwork their people and reduce the overall headcount. Let me assure you, I am not disputing that reducing headcount will save money. In the short term, it most certainly will. However, this savings in headcount comes at a higher cost.

Some managers seek headcount reduction because they associate fewer operators with increased profits. Elimination of headcount for this reason yields poor results culturally. Soon after launching a campaign such as this, operators will start noticing the intent. This will lead to a refusal to cooperate with any effort to reduce headcount and eliminate waste. The operators will begin to adamantly resist changes because they are fearful of losing their job or possibly contributing to a co-worker’s job loss. This causes stress, conflict, finger-pointing and eventually failure. It also is a surefire way to pit management against the shop floor.

This lack of understanding has left a bad taste in the mouths of many shop-floor employees. Likewise, it has also contributed to the defeatist attitude of many managers because they have attempted this idea of “lean” before and have seen it fail more than once. Understandably, this is frustrating for an organization’s leadership as well as other employees. It leads me to the opinion that the word “lean” might need to be eliminated from our vocabulary for a while.

I think when leaders focus strictly on lean, they inevitably pay more attention to the dollar numbers. While the dollar numbers are important, there are other elements within an organization that far outweigh the bottom dollar. These elements create awe-inspiring flexibility and opportunity for future growth.

It is my opinion that running a lean organization is actually a by-product of continuous improvement philosophies, specifically of addressing the eighth waste (the underutilization of people). When you focus your sights on lean and lean alone, you miss the bigger picture.

Culture is the most important element to true organizational maturity. Continuous improvement philosophy zeroes in on culture and stimulates its development. It focuses on the people and the success of the organizational team. Its aim is to involve and engage everyone, to educate and stimulate the minds of the people. It requires everyone’s engagement, commitment and trust. It is culture-driven and will help the company grow. Everyone should be trained on its ideals and philosophies.

For a continuous improvement organization to truly thrive, leadership must dedicate at least 85 percent of its efforts to the development of employees through training, influence and on-the-job coaching. Once the majority of the organization understands the philosophy, then and only then is it time to move to the next step. Otherwise, resistance will remain at the doorstep.

The remaining 15 percent can then be dedicated to implementing the actual tools and engaging employees. The employees who own the process need to make the change and improve the organization’s current state (with assistance from a trained continuous improvement team). Not only does this encourage them, but it also increases the chance that the change will be sustained and standardized.

Organizational leaders must be careful not to put the cart before the horse. You cannot simply read about or benchmark other companies and expect to photocopy their results into your organization. You have to remember that a picture is a snapshot in time. It does not tell you what happened before the picture was taken. Therefore, you cannot expect significant, highly profitable changes too fast. You cannot have the baby without going through the labor, so to speak.

Let’s get “lean” out of our heads and start communicating continuous improvement. It’s broader, includes everyone (especially the operators) and will provide long-term gains. Let’s train, empower and engage rather than cut time, rebalance and lay off.

Training within industry (TWI) is a micro version of “creative destruction,” a term used by some economists to describe a free-market capitalistic economy. Creative destruction means that new businesses, services or products enter and create new markets while destroying existing ones — with the overall result being beneficial. Kaizen is no different, but on a smaller scale. However, let me explain this in the context of TWI.

I have done much research on industrial history, specifically on Toyota, Ford’s Highland Park plant, the development of accounting and TWI. In every case, many mistakes were made during the development of all of these ventures. Kiichiro and Sakichi Toyoda, as well as Taiichi Ohno, all made some very costly mistakes from financial and personal perspectives, but the difference is that they eventually learned from these errors. They also never let these errors bring them down.

Learning is suffering; it is not necessarily a victory — although that does happen at times. The deepest learning comes from suffering (i.e., failure). That is the philosophy behind PDCA (plan, do, check, act).

By its nature, PDCA is an acknowledgment of failure, but a victory in learning. If PDCA was a victory, you would never have to go back through the cycle again, but that is exactly what PDCA intends for you to do.

This is the process of learning: fail, learn, fail, learn, and so on. This will equate to a victory. The failure I am referencing does not mean that things necessarily go bad; it means that a solution, or at least a perfect solution, has not yet been reached. This is why Toyota frequently refers to these “tries” as countermeasures. Counter the problem and measure the results, then try again and again (PDCA). This is creative destruction at a micro level and is exactly what TWI manifests both mechanically and philosophically.

TWI is a mechanical means to achieve creative destruction — a pattern for a behavior. Once this behavior is learned and made into a habit, deep learning takes place and the learning cycle can become addictive.

How does TWI achieve this? It goes back to the old stair-step analogy frequently used by Toyota folks. “Job methods” is the vertical part of the step — the change, the improvement, the experiment, the “try.” “Job instruction” is the horizontal part of the step or the stabilization of the experiment upon some level of success (or learning). “Job relations” provides an environment where both the leader and the subordinate feel comfortable and confident to work together through this change-stabilization process, which is a learning process running parallel with the mechanical PDCA process of the stair steps.

In this manner there is a symbiotic relationship between the organization and its people. The company gets an improvement (better performance). People get to contribute in a meaningful way and grow in knowledge and experience. Then each gets to further leverage the mutual benefit over and over again, thus making both better through many tiny creative-destruction cycles.

As I’ve gotten older, I have tried to curtail my consumption of fast food. I’m aware that the fat content, calorie counts and general nutrition levels are not the healthiest available. I know that as we age, we should watch our cholesterol, our weight and make sure that we eat healthy. I also know that my diet will directly contribute to the length and quality of my life. With all that being said, I love fast food. I am usually pretty good at keeping a balance of healthy eating and not-so-healthy eating, but sometimes I just want something that comes quickly and cheaply even though it may not be the best thing for me.

I recently decided to partake in some fast food in spite of the long-term potential health consequences. As I was standing in line reading the menu, I was watching the processes behind the counter. This particular restaurant was moving like a choreographed dance recital. It appeared that each person clearly understood his or her purpose and was executing flawlessly.

All too often, however, fast-food restaurants are rather hit or miss. You never know exactly what sort of food or service you may receive. In some cases, the employees move slowly, while in other cases, they may move quickly. Sometimes the food is hot and fresh, and sometimes not so much. Sometimes you get the feeling that the employees could not possibly care less about serving you, while others are courteous and concerned professionals.

One of the challenges of fast-food chains is to drive consistency. In fact, this is a key challenge in all businesses. Consistency will drive customers back to us, while inconsistency will drive them away. Whether we are serving cheeseburgers, small electronics or large engineered systems, our customers want us to be consistent. They want to know what to expect from us, and they want to know that they can count on us. They want us to do what we say, not surprise them, and deliver high-quality products and services. It is up to us to build long-term processes that drive consistency and build that confidence in our organizations.

So, if you have occasion to visit a fast-food establishment, or any restaurant for that matter, watch the processes if you can to see what they are doing. Try to see where things are located, how they are marked and how each process is defined. See if there is something you can learn from your favorite eatery — especially if they are good at value delivery.

Many companies say they use standard work, but actually very few do, at least in the manner that leverages lean fully at the operational level. Instead, most firms use some form of work standards or work instructions.

Frequently, plants trying to implement standard work within their lean efforts will do a kaizen event and, in the process of the event, develop and post a number of forms (standard work sheet, work combination sheet, etc.) at the new cell. This action is a helpful exercise for the event but does not really help with the performance and sustainability of standard work or the continual performance of production.

The problem is that while the operators may be referred to the new forms (this is what you need to do) or even involved in the development of the forms (the new standard work) during the event, this is not the purpose of the standard work forms or standard work for that matter. The forms are only for management, not the operators.

This is where training within industry (TWI) comes in. The operators are not going to be able to perform the new standard work (even if they were involved in the development of the forms) to the level that it needs to be performed to maintain the necessary stability of the process. TWI’s job instruction tool is the means of training operators to perform properly and consistently to the standard. TWI’s job methods tool is the source of the continuous improvement needed to develop and implement improvements to the process day in and day out. Traditional methods of training will not cut it.

The development of good standard work is a fundamental key to lean success, but unfortunately most companies do not go nearly far enough with their effort — and, in most cases, are not even aware of this. If you want to be successful with standard work, there is no alternative other than TWI.

In a competitive, global marketplace, customers are more attuned to quality craftsmanship than ever before. They are also more likely to value corporate stewardship, especially as it relates to sustainability. In a March 2012 Nielsen survey, two-thirds of customers responded that they would prefer to buy products and services from companies that contribute to the good of society, with environmental stewardship highest on the list. Nearly half of the respondents said they would pay more for such products and services.

Industrial facilities can achieve dramatic improvements in both quality and sustainability by adopting a single process methodology: Lean Six Sigma. However, personnel in many enterprises — from executive decision-makers to plant employees — often misunderstand both the concepts of Lean Six Sigma and the realities of its implementation. In this article, we will look at these issues and make recommendations for how plants can reap substantial rewards from a Lean Six Sigma program.

What is Lean Six Sigma?

Lean Six Sigma is a conceptual framework that, when properly implemented, extends far beyond the plant floor and into every aspect of a company’s business. It combines two of today’s most influential trends:

  • Improving quality, as measured by eliminating defects and process variation, increasing predictability and consistency, and focusing on those products/processes/services that the customer values most (Six Sigma).
  • Reducing waste, as measured by eliminating or drastically reducing unnecessary motion, transportation, inventory, processing, production and defects (Lean).

Lean Six Sigma represents a perfect union of two beneficial practices. Although the concept of Lean encourages a sustainable, cost-effective outcome, it does not by itself provide the necessary process-improvement strategies to achieve these goals. Six Sigma is all about process improvement, with benchmarking (measuring to establish performance-improvement targets) and prioritizing (determining which process improvements will yield the greatest results) adding value to the effort. In other words, with Six Sigma, a company can determine exactly how many problems, defects, flaws, inaccuracies, etc., are occurring and then provide a systematic methodology to eradicate them.
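
To make “determine exactly how many problems are occurring” concrete, Six Sigma practitioners typically express defect rates as defects per million opportunities (DPMO). The sketch below uses made-up numbers purely to illustrate the calculation; for reference, the classic six-sigma benchmark is 3.4 DPMO.

```python
# Illustrative DPMO (defects per million opportunities) calculation with made-up numbers.
defects = 38                 # defects found during inspection (assumed)
units = 5000                 # units inspected (assumed)
opportunities_per_unit = 4   # ways each unit could be defective (assumed)

total_opportunities = units * opportunities_per_unit
dpmo = defects / total_opportunities * 1_000_000
yield_pct = (1 - defects / total_opportunities) * 100

print(f"DPMO: {dpmo:.0f}")          # 1900 defects per million opportunities
print(f"Yield: {yield_pct:.2f}%")   # 99.81% of opportunities are defect-free
```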

DMAIC

Lean Six Sigma involves five distinct steps: define, measure, analyze, improve and control (DMAIC). Although many facilities pay lip service to these steps, the ones that gain the most from Lean Six Sigma truly embrace them throughout all levels of operation. DMAIC isn’t a magic recipe for achieving the benefits of Lean Six Sigma as a whole. Rather, it is a formula for achieving the incremental gains that eventually lead to improved quality and reduced waste, using a continual feedback loop to refine processes in pursuit of excellence.

While Lean and DMAIC may drive improvements on the plant floor, the principles are applicable in every corner of the organization. The excellence that Lean Six Sigma fosters permeates the entire operation, and both customers and employees are happier as a result. This fact is evidenced by the many companies, from accounting firms to human resources agencies, that are achieving dramatic results with Lean Six Sigma. These are 100-percent service firms without a single product to design, engineer, build, finish or repair, and yet Lean Six Sigma is meaningful to them.

Keys to Achievement

For many companies and facilities, the breakdown of Lean Six Sigma occurs when management or personnel misunderstand its intrinsic nature. Lean Six Sigma isn’t an approach that an organization adopts purely to save money. It also isn’t a sprint or a cookie-cutter solution that works the same way for everyone. Lean Six Sigma requires substantial discipline and governance. In order to prove its effectiveness, it must produce results that can be validated, whether that validation comes from the finance department documenting monetary savings or the service department being flooded with positive customer feedback.

The importance of dedicated effort and patience in a successful implementation cannot be overstated. A few of the core requirements that are necessary for companies or facilities to succeed with Lean Six Sigma include:

  1. Organizations must have a compelling reason for implementing Lean Six Sigma.
  2. Senior management must be 100-percent invested in and committed to achieving Lean Six Sigma.
  3. Companies must be willing to invest in appropriate, qualified resources for the initiative, whether those resources are employees, materials, technologies or a combination.
  4. Stakeholders and participants must work together as a team.
  5. Team members must be empowered to carry out initiatives without the need for constant evaluation and approval.
  6. Organizations must commit sufficient time and resources to training, which is crucial to achieving a positive outcome.
  7. When companies work on their priorities, they should focus not on improvements that create change quickly but rather on those that have the most impact on quality.
  8. The feedback loop is pivotal to incremental and long-lasting improvement and cannot be bypassed.

Why Lean Six Sigma?

If Lean Six Sigma requires so much dedication and effort, why go through it at all? The answer is that the rewards can be truly amazing. Some of the positive improvements that facilities see when adopting this approach include the following:

Value for the Customer

Lean Six Sigma leads to improved service, delivery and quality, all of which create value for customers and drive business to the company’s door.

Increased Workforce Productivity and Morale

Not only does the process improvement from Lean Six Sigma increase productivity, but surveys show that it also boosts employee attitudes and satisfaction within the workplace.

More Fluid Strategic Positioning

Lean Six Sigma operations are more nimble and flexible regarding changing conditions, enabling them to adapt more readily to unanticipated changes in the business or economic climate.

Stronger Competitive Stance

Customers, vendors and partners are drawn to the type of excellence and success that Lean Six Sigma operations exhibit, making these firms more competitive in all aspects of business operation.

Standards-Driven Achievement

When processes are standardized, personnel training, project management and monitoring, problem-solving and other aspects of corporate operations are simplified and streamlined.

Better Innovation

When personnel and management aren’t constantly solving problems and/or surmounting challenges, it opens the way for more innovative and imaginative thinking.

Healthier Bottom Line

Not only do lean operations save money in terms of reduced waste of all types, but greater customer satisfaction and fewer returns result in higher profitability. Numerous companies have documented annual savings from Lean Six Sigma initiatives that range from $2,000 to $250,000 (and higher) per improvement, and those figures don’t include the added value of increased sales, enhanced reputation and expanded customer goodwill. The U.S. Army reports that its Lean Six Sigma initiatives have resulted in savings approaching $2 billion.

I recently came across a blog about an executive who switched from his traditional “sit-down desk” to a “stand-up desk.” He remarked that he loved the new desk and would never switch back. This concept fascinated me, and I investigated it further. I read many articles, and as it turns out, Thomas Jefferson, Ernest Hemingway, Winston Churchill and many others worked on their feet up to 10 hours a day.

I was inspired by this dynamic concept. I thought if implemented creatively, it could benefit supervisors on the production floor. But I knew this was an unusual concept that wouldn’t be accepted right away. I had to try it for a period of time and demonstrate the possibilities. After all, if the stand-up desk is good enough for a man as intelligent as Thomas Jefferson, it’s worth a shot, right?

I wanted to approach this experiment as I would if I were modifying an assembly line or fabrication operation. So, before building the stand-up desk, I asked myself some very basic questions, many of which were based on 5-S concepts.

What do I have at my sit-down desk that I can do without? (Seiri: Go through all materials and keep only essential items.)

What can I do to ensure that everything has its place? (Seiton: There should be a place for everything, and everything should be in its place.)

What do I need to keep the desk area neat and orderly? (Seisō: Keep the workplace clean and neat.)

What is my standard work? (Seiketsu: Work practices should be standardized.)

How much space do I really need?

What do I need to maintain comfort while at the desk?

As I built this desk, I went through many rounds of trial and error before bringing it to the floor. It took about a month of contemplation and modification before it was ready. Once I had exactly what I felt could be trialed safely and comfortably, I made the leap and trashed my traditional desk.

During this experiment, I’ll confess I was laughed at and labeled as “wacky” by some of my colleagues. That’s OK; a lot of new ideas receive ridicule before respect. I have now had my new desk for four months, and in my opinion, the stand-up desk has many positive benefits when looking at it from a lean perspective.

 It is structured much like an assembly operation should be structured. I only have what I need when I need it. All of my materials (phone, keyboard, monitor, files, trash can, etc.) are within arm’s reach. I only have the amount of flat surface my standard work requires. On the occasion that I need more surface for writing or reading (periodical work), I have a flat surface that rolls out from under the desk and can be returned easily. As you can see from the picture, there was a lot of space saved as well — 50 square feet.

Everything on my desk has its place. Due to the fact that there is only enough space for what I need, it is very easy to keep neat and organized. I no longer have stacks of random papers, Post-its or folders. On the right side of my desk, I have a broom holder and a dustpan to sweep my area at the end of each day. This task is much easier with the new desk because the floor is visible and accessible on all sides.

As far as ergonomics, it is very comfortable. In my research, I found that when standing, it helps to elevate a foot to relieve strain from your back, so I added a bar across the bottom 6 inches from the floor. I also have a padded mat in front of the desk to reduce any fatigue. The monitor is at eye level, and the keyboard is at a height that would suit most individuals.

I have found that this desk has made me more active and I feel more energetic. Instead of “zoning out” in front of the computer (which occasionally happens to everyone), I do what I have to do and move on to more activity. This has led me to have faith that this would be a good cultural tool in the manufacturing community when linked with other lean measurements.

I have to assume that organizations (including mine) will not make this a standard, but if they do, great. If you decide to try it, remember that it is hard for the first week or so. Just stick with it and stay focused. It gets better.

I highly suggest that continuous improvement-driven individuals give this a try. I have read very few negative comments from people who have tried it. I believe this can help standardize a supervisor’s work, promote dynamic 5-S practices and result in elevated activity on the manufacturing floor. It’s an idea that has been beneficial personally and has the potential to be beneficial to others.

From two-slice toasters to Boeing jets, tools and dies are omnipresent staples of manufacturing. And where there are tools and dies, there are idle employees and lost production.

Three decades ago, Dr. Shigeo Shingo largely solved this problem with the introduction of single-minute exchange of dies (SMED), a lean manufacturing technique designed to reduce the amount of time tooling and die changes require. “Single minute” here means a changeover time of less than 10 minutes – a single-digit number of minutes. Dr. Shingo’s innovations continue to influence the automotive and other industries.

“SMED is about an attitude of continuous improvement and never becoming complacent with the status quo,” said Bob McClintic, aka Dr. Die Cast.

Tools, dies and molds are fundamental to manufacturing. Tools are used to cut and form metal and other materials. Dies are metal forms used to shape metal in stamping and forging operations. Molds, which are also made of metal, are used to shape plastics, ceramics and composite materials. Both low-pressure casting and high-pressure die casting use steel molds called “dies” to produce products ranging from automotive transmission cases to aluminum wheels.

In manufacturing, tooling and die changes take a considerable amount of time. While tooling or die changes are being made, production lines are shut down. This lost time and associated costs must be covered. In addition, the downtime for changing tooling or dies impacts other production decisions. The more time it takes, the longer the production cycle. Operations personnel increase lot sizes and run longer in order to reduce the impact of setup costs.

There are advantages to having the tool or die out of the machine that may not be apparent at first glance. Molding, stamping, tooling and cutting all produce soils that can affect the process’s ability to make quality parts.

According to Mike Bangasser of Best Technology Inc., the best time to clean the die or mold is as it comes out of the machine – not when it’s time to reinstall it in the machine.

“Cutting fluids, slag and other particulates will accumulate on the surface of a mold or die during normal usage,” Bangasser said. “Especially in precision applications, it’s critical to clean the working surfaces to ensure correct tolerances are maintained. Letting soiled tooling sit on the shelf is like letting your dinner dishes, pots and pans sit on the counter overnight before trying to wash them.”

Tool and die companies, which are typically small businesses staffed by skilled craft workers, make it possible for their customers to manufacture innovative products, from auto parts to household appliances to fighter planes. High-volume tool and die shops have incorporated “quick change” fixtures to reduce setup time and increase accuracy between machining processes. However, when tool and die changes take too long, complications arise including higher manufacturing costs, lower quality levels, excessive test runs and pulling people off task to find tools.

By contrast, employees at Honda’s plant in Anna, Ohio, are superstars when it comes to die changes, completing changes on 3,500-ton die-casting machines in 15 to 20 minutes. Others may take up to four hours to perform similar operations.

SMED Benefits

With SMED, increased production is achieved without purchasing new equipment or hiring additional employees. In addition, because SMED reduces the number of items that must be produced in a production run, the production line becomes available to produce other products.

Because SMED reduces changeover time, it becomes economically possible to have smaller production runs. This provides a number of advantages:

  • Less capital is tied up in inventory and less warehouse space is needed.
  • Less work in progress, reducing costs further.
  • The ability to quickly respond to market changes.
  • Improved quality and less waste. Defects can be identified and the problem corrected without large quantities of defective product being carried in inventory.
  • Product innovations, which provide a competitive advantage, can be brought to the market sooner because inventories of the older product are smaller and will be sold off quicker.

Using SMED to reduce changeover times provides a number of benefits that go right to the bottom line, including improved productivity and greater equipment utilization, but don’t expect instant results.

“These kinds of transformations take planning,” said Steve Udvardy, director of research, education and technology for the North American Die Casting Association. “Management is often too busy putting out fires, and too concerned about detracting from what we’re doing today, to think about how these kinds of shifts will make things better in the long run. Upper management has to set the tone, take the time and discipline to help foster a culture of change, and understand that there will be hurdles to overcome.”

Not Just Tools and Dies

Being in tune with SMED is like being the producer of a Broadway show. As everyone knows, the show must go on – even if you have to rehearse a few times with all the necessary players and tools in place.

“The No. 1 way that SMED changes human behavior is making one more conscious of waste,” said Udvardy. “If you can reduce eight turns of a wrench to one-quarter turn of a wrench to tighten a clamp, then that’s progress.”

SMED Tools and Visual Communications

Clear communication is vital throughout this process. Checklists should be provided to ensure everything is ready before the shutdown begins. Procedures should be readily available. Safety warnings and information must be prominent, including labeling tooling as “ready to set” or “not ready, work order incomplete.”

Utilize your smartphone’s video camera to record details of all team and changeover activities. Capture activities from both the operator and helper sides of the machine. Record elapsed time. Install a sign that reads, “Cameras are recording work for learning purposes.”

Use stopwatches to record incremental changeover activities against a timeline. This information helps quantify how people, machines and equipment spend time during the changeover process.

Create and post charts and graphs for recording data. Correlate activities on the Y axis and incremental time/elapsed time on the X axis. Capture data from team members and record and share with your teams. Break activities into specific actions/activities such as “move die to machine, align die with keyways, clamp die, connect hydraulics and/or electrical switches.” This will allow you to identify areas where more practice is needed or simpler methods can be developed.
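To make the idea concrete, here is a minimal sketch (in Python) of the kind of changeover timeline described above. The step names and minute values are hypothetical examples; on the floor, they would come from your stopwatch readings or video review.

    # Minimal sketch: building an elapsed-time view of a changeover from
    # individually timed steps. Step names and durations are hypothetical.
    changeover_steps = [
        ("Move die to machine", 6.0),            # minutes
        ("Align die with keyways", 3.5),
        ("Clamp die", 2.0),
        ("Connect hydraulics/electrical", 4.5),
    ]

    elapsed = 0.0
    for step, minutes in changeover_steps:
        elapsed += minutes
        print(f"{step:<35} {minutes:>5.1f} min   elapsed {elapsed:>5.1f} min")
    print(f"Total changeover time: {elapsed:.1f} minutes")

Plotted with activities on the Y axis and elapsed time on the X axis, the same data highlights which steps consume the most time and where practice or simpler methods would pay off.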

Like all changes, achieving proficiency in SMED is not an overnight exercise. Instead, expect a few bumps in the road. Training, review and reassurance will be necessary. Help your team by communicating key SMED messages with signs and labels that can be easily updated and relocated.

Electric motors are essential to plant operations in every industry, which is why understanding their roughly 50 failure modes can help you develop a better maintenance program in your plant.

Electric motors are essential for keeping plants running smoothly and effectively. If one fails, it can mean costly downtime for the plant and create a variety of safety hazards. There are a number of different failure modes, and by understanding them, you can extend a motor’s lifespan from as little as two years to as many as 15.

The key is moving from the reactive portion of the P-F curve to the predictive phase. By using ultrasound technology, such as the Ultraprobe 15,000, you can detect problems before they create serious damage in the motor. Because there are so many components within a motor, a failure mode can emerge in a variety of places. A motor has between eight and 10 major components, each with its own failure modes, bringing the total to around 50, so by properly addressing them, you can greatly extend the life of your motor.

Motor housing
Failures in motor housing can crop up from improper installation, physical damage, corrosion and material buildup. While the motor housing may not seem like a true performance component, these shortcomings can ultimately affect the way other components perform.

For instance, a soft foot could lead to bearing failures, shaft bending and broken or cracked feet. A soft foot exists when a motor, placed on a flat surface, does not have all of its feet flat on that surface. Material buildup can raise the motor’s operating temperature, ultimately leading to damage to other parts of the motor, such as bearings.

Motor stator
Motor stator failure modes emerge from physical damage, contamination, corrosion, high temperature, voltage imbalance, broken supports and rewind burnout procedures. Often, these problems originate in motor repair shops.

Stator failures can occur due to the burnout procedure used to strip the old windings before a motor is rewound, which often follows an emergency failure. Because the plant will need the motor returned as soon as possible, hasty repairs can end up damaging the stator by improperly heating the housing and the stator core. This can also lead to motor inefficiencies.

Motor rotors
Rotors are composed of numerous layers of laminated steel, and the rotor windings consist of copper or aluminum alloy bars that are shorted at both ends by shorting rings. These components can fail through thermal stress, physical damage, imbalance, broken rotor bars, contamination and improper installation.

Physical damage to rotors can develop during certain emergency maintenance tasks, including bearing replacement, motor rebuilds, and disassembly and reassembly. Generally speaking, motor bearings should not be changed at plant locations, especially on critical equipment.

Imbalanced motor rotors are common, and the imbalance puts a lot of strain on bearings. It can ultimately lead to the rotor making contact with the stator, creating another point of failure. Again, improper rebuilding tactics, such as overheating, can damage rotor components as well.

By establishing precision balance standards, you can be sure you are preventing these kinds of imbalance failures.

Motor bearings
Motor bearing failures can emerge from improper handling and storage, improper installation, misalignment, improper lubrication, start/stop cycles, contamination, overhung loads and motor fan imbalance.

Contamination is one of the biggest reasons for bearing failure modes. This occurs when foreign contaminants or moisture enter the bearings, usually during the lubrication process. You can take steps to prevent contamination during the regreasing process to ensure that they are kept out.

It is also important that your motor is properly outfitted for the task for which it was selected. This means using the right bearings for the application. Motors that drive sheaves or sprockets mounted on the shaft will need roller bearings rather than the ball bearings found in most standard motors.

Lubrication can always be a major cause of failure because there are so many different ways to apply it improperly. Too much or too little lubrication, along with the wrong type of lubricant, can lead to premature wear. Motor greases should be polyurea-based, not all-purpose greases. Always take the plug out of the bottom so that old grease can drain properly. Relief valves can also help prevent over-greasing.

The UE Grease Caddy can be a great tool for listening to a bearing while lubricating a motor.

Motor bearing seal failures tend to emerge from improper lubrication or installation.

Motor fans
Motor fans tend to fail from physical damage, ice buildup, foreign materials and corrosion. Fans help keep the temperature down on a motor, which is essential to making sure that the rest of the components are performing well.

Motor fan guard failures can also lead to a larger motor failure. These tend to happen through physical damage and plugging. Taking the time to keep fan guards clean goes a long way toward preventing these failures.

Motor insulation and windings
When it comes to motor insulation and windings, there are a number of potential issues. Contamination and moisture can lead to winding failures, often because motors are not stored properly. Overheating is another issue that can cause a motor failure. Insulation breakdown, cycling and flexing, along with AC drive stress, round out the possible failure modes for this category.

The life of the insulation in a standard electric motor is based on the temperature at which the motor operates. For an electric motor operating at a particularly high temperature, you could be cutting back on its lifespan. In fact, for every 18 to 20 degrees Fahrenheit of additional operating temperature, the insulation life is cut in half. While better insulation can extend the lifespan, temperature is easily one of the biggest factors, which is why bringing in cooler outside air can make such a difference.
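As a rough illustration of that halving rule, the sketch below (Python) estimates insulation life from operating temperature. The 20-year nominal life and the temperature rises are hypothetical numbers chosen only to show the arithmetic.

    # Minimal sketch of the halving rule: insulation life is cut roughly in half
    # for every 18-20 degrees F the motor runs above its rated temperature.
    def insulation_life_years(nominal_life_years, temp_rise_f, halving_interval_f=18.0):
        """Estimated insulation life after running temp_rise_f above rating."""
        return nominal_life_years * 0.5 ** (temp_rise_f / halving_interval_f)

    for rise in (0, 18, 36, 54):
        print(rise, round(insulation_life_years(20.0, rise), 1))
    # 0 -> 20.0, 18 -> 10.0, 36 -> 5.0, 54 -> 2.5 years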

Insulation breakdown can be a big problem, as it will cause windings to short out. These problems can be detected through MCE testing and thermography. Turn-to-turn winding shorts can crop up from contaminants, abrasion, vibration or voltage surges.

Cycling and flexing is another problem, typically caused by frequent start-and-stop operation of the motor. This kind of operating cycle leads to repeated heating and cooling of the windings and insulation, which can cause wear such as holes in the insulation, ultimately leading the motor to short and fail.

Motor shaft
Motor shaft failure modes occur due to physical damage, improper manufacturing, improper installation and corrosion. For instance, installing a motor improperly can cause certain components, such as the motor casing, to corrode and create imbalance.

How to make your motor last
Now that we are aware of the various types of motor failure modes, we can take better steps toward creating a preventive maintenance plan.

Many maintenance tasks can be addressed through a weekly hands-on inspection. Grease the motors with the proper motor-rated grease, adding grease or oil only when needed. Incorporating an ultrasound-assisted lubrication program can go a long way in preventing bearing failure.

There are a number of ongoing tasks you can do to ensure that motors are in their best performance conditions. Keep your motors clean and at the proper temperature with consistent airflow, and store motors properly to keep moisture from contaminating them. Also, keep moisture and chemicals away from the motor so as to prevent contamination.

There are also a number of precision maintenance steps you can take to enhance the performance of your motors and reduce wear and tear. Always align your motors to under 0.003 inches in all three planes, while also taking care to eliminate soft feet. Specify precision balancing of the rotor to 0.05 inches per second. Finally, only use certified motor rebuild shops, because as discussed earlier, improper repairs can lead to greater damage down the line.

In terms of predictive maintenance measures, use motor circuit evaluation to help detect motor failures. Vibration analysis can be used for a number of other motor failures, while mechanical ultrasound can be used for bearings, rotor bars and electrical failures. Also use oil analysis on sleeve bearings with oil reservoirs.

There are a number of other ultrasound applications as well. Failures tend to first appear in bearings, meaning that the Ultraprobe 15,000 can be a great way to detect Stage 1 failures. The device is also good at detecting over- or under-lubrication. As ultrasound becomes an increasingly integral part of maintenance operations, its applications keep expanding. It can be used to detect electrical failures like arcing, rotor bar problems and rotor imbalance, along with alignment and soft foot issues.

Generally speaking, when a motor fails, you need to decide if it is worth rebuilding or buying a new motor. Using a motor decision flowchart can help guide this decision. Talk with a CMRP to find a decision flow chart for your operations.

Finally, you can get a lot more out of your motors by taking proactive maintenance steps. Purchase precision motors for all of your critical applications and always use precision maintenance for installation, alignment, balance and lubrication.

By adhering to these steps, you can extend the lifespan of your motors and limit downtime in your plant, effectively speeding up operations, limiting cost and improving performance.

Managing a work order backlog is not the most exciting of maintenance tasks, but without a complete and up-to-date backlog, important work will be forgotten. Indeed, good backlog management is a prerequisite for effective planning and scheduling.

Defining a Backlog

“Backlog” means different things to different people. There are two common definitions. The first and most common is that a “backlog” is a list of all work that has been approved and will eventually get done. This is the correct definition. It is sometimes measured in trades-hours, but it is better measured in weeks, calculated as the time it would take to complete all the current work in the backlog with the resources that could be applied to this work. This may or may not include PM work.

The second definition is that a “backlog” is just those work orders that have passed their “required by” date. This definition should not be used because it is not logical. Most maintenance departments have a reasonably fixed number of tradespeople who perform work from work orders generated more or less at random.

When a work order is initiated, the date on which the work will be completed depends on its importance relative to the work already in the backlog, which is known, and also the work orders that will be generated in the future, which are unknown. The result is that any “required by” date assigned when a work order is initiated will be just a wild guess and usually wrong. Assigning a “required by” date should be limited to those few work orders that have a genuine deadline. Otherwise, these dates will be in conflict with the objective of always working on those jobs that have the greatest value at any time.

In this article, the first backlog definition will be used. Within this backlog of work orders that have been approved but not yet started, there are sub-groups. These include the “planning backlog,” which can be defined as all work orders on which any commitment such as purchasing has been made, and the “ready-to-schedule backlog,” which is made up of those work orders for which all materials and other resources are available so work could start at any time.

Backlog Filtering

Combining all approved work orders into a single backlog can be overwhelming. Instead, it should be filtered into logical components. The following filters are recommended:

Shutdown Work and Non-Shutdown Work

Shutdown work must obviously stay in the backlog until the appropriate shutdown is scheduled, which may be a year or more. Leaving this inactive work in the backlog complicates the management of ongoing non-shutdown work, so it should be hidden until the time comes to prepare for the shutdown, when it will be managed on its own. Of course, the preparation work for shutdowns is very important and should be prioritized along with all other non-shutdown work. Separating shutdown and non-shutdown work is also necessary for efficient shutdown planning.

Mechanical and Electrical Work

This also would include work for all other categories of maintenance resources, such as area maintenance crews. Remember, the backlog for a maintenance crew should be limited to the work for that crew and must include references to the support required from other crews.

Preventive Maintenance and Corrective Maintenance Work

Preventive maintenance work should be pre-planned and pre-scheduled. The instructions for inspections and other routines should be on file and included in preventive maintenance (PM) work orders. The work should be automatically scheduled by the maintenance computer system. Of course, PM work and corrective maintenance require the same limited trades resources and need to be scheduled together, but for the purposes of backlog management, they can be separated. Backlogs are more easily managed if PM work is hidden until the time comes for it to be scheduled.

PM work should be set up in the maintenance computer system so there is a steady workload scheduled for each work day. This allows the manpower assigned to PMs to be constant and considered “untouchable” for corrective maintenance. This way, the scheduling of both preventive and corrective maintenance is simplified.

Backlog Cleanliness

Considerable discipline is needed to limit work in the backlog to just those jobs that will be completed in the near future. Backlogs should never contain completed jobs, duplicated work orders or low-priority work that no one ever intends to do.

An important part of the discipline for maintaining a clean backlog is to close work orders (or change the status to “physically complete”) as soon as the work is done, which should be the same day for non-shutdown work and within a few days for major shutdown work.

The function of maintaining a clean backlog should be included in the job description for a designated maintenance position. It is one non-planning function that is appropriate to assign to a planner. It does not take much time, and the planner is in position to know the status of all work orders in the area.

Backlog Size

There is an optimum size for a non-shutdown work order backlog. If a backlog is too small, it will be difficult to keep tradespeople on priority work. Break-in and unplanned work will increase, and productivity will fall.

If the backlog is too large, a lot of material may be tied up and the backlog will be difficult to control. There will be a loss of confidence that work will be done, and “emotional emergencies” will be encouraged. It can even become easier to submit a new work order than to try to find an existing work order in a large backlog. A large backlog is of little help for work scheduling.

Ideally, the backlog should be of such a size that key maintenance and operations personnel, including the area maintenance supervisor, the operations coordinator and the planner, have a good enough “feel” for what’s in the backlog to be able to immediately recognize duplicate work requests.

For a typical 24/7 continuous-process operation, a good starting objective would be to have a “total backlog” of about four weeks, a “planning backlog” of about two to four weeks and a “ready-to-schedule” backlog of one to two weeks.

Note that a “ready-to-schedule” backlog of one to two weeks implies that all the material for this work should be staged somewhere onsite and be ready for use. This kind of materials management has great benefits but will be successful only if a large percentage of the work on the schedule is executed according to that schedule.
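To check a backlog against objectives like these, the estimated trades-hours in the backlog can be converted to weeks. Below is a minimal sketch (Python); the crew size, hours and wrench-time percentage are hypothetical and should be replaced with your own figures.

    # Minimal sketch: expressing a backlog in weeks rather than trades-hours.
    def backlog_in_weeks(backlog_hours, crew_size, hours_per_week=40, wrench_time=0.55):
        """Weeks required to clear the backlog with the resources available to it."""
        weekly_capacity = crew_size * hours_per_week * wrench_time
        return backlog_hours / weekly_capacity

    # Example: 1,200 estimated trades-hours against 10 tradespeople who can
    # apply about 55 percent of a 40-hour week to backlog work.
    print(round(backlog_in_weeks(1200, 10), 1))  # -> 5.5 weeks, above a 4-week objective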

Adding Work to the Backlog

Any work added to the backlog should receive some scrutiny from the area decision-makers. A good process is for the area maintenance supervisor and the operations coordinator to review all new work requests each morning. One important function in this step is to decide whether the request is for a “small job” that can be completed immediately and does not justify being planned or scheduled. Allowing anyone to add work to the backlog without review guarantees that it will become disorganized and of little value.

Backlog Management Software

Ideally, your maintenance management system should be used for backlog control, but unfortunately many systems have very weak functionality in this area. Managing a backlog is all about sorting and filtering lists of work. These lists will contain columns for equipment and work identification, scheduling notes, priorities, resources, status, etc. From my observations, there are few, if any, maintenance computer systems that are better at managing work lists than a good spreadsheet. In fact, many organizations cannot effectively manipulate work lists without first downloading them from their maintenance computer system to Excel.

Most maintenance computer systems are excellent at recording maintenance costs against work orders, and should always be employed for this purpose. However, if using them to manage work lists is cumbersome, a well-designed and well-managed spreadsheet process that is integrated with the maintenance computer database should be used.

Backlogs can be easily maintained and kept up to date by utilizing your site’s server to manage and share work lists with secured templates and disciplined management.

In conclusion, managers must pay close and frequent attention to backlogs to ensure they are “clean,” the estimated times are realistic and that they stay close to the optimum size.

Too many times and in too many organizations, we send people off to training with high expectations that they will lead change and improvement upon their return. We hope that overnight they have become equipped with the skills and knowledge to effectively start doing their new work. Unfortunately, weeks later, the trainee and his or her organization are frustrated with the lack of progress.

Where did we go wrong? Let’s take a recent case that I had the opportunity to witness. ABC Company chose to send its new planner/scheduler off to be trained in the proper procedures for planning and scheduling work. The planner came back to the site with his newly learned planning skills and attempted to plan and schedule work. On the first day back, he attended the normal morning meeting where the issues of the last 24 hours are reviewed and any items left uncorrected are assigned for the day. After the first hour was spent in the meeting, the planner headed off to his office only to be stopped by the maintenance team leader, who needed a hot work permit completed and delivered to Johnny down at the waste-water treatment area so he could get started welding a bracket for a pump. Thirty minutes later, the planner delivered the permit. After a quick break, the planner finally made it to his desk and turned on the computer. Fred and Susan had been watching for the planner to show up, and they pounced, requesting that he requisition some parts they needed for upcoming work the next day. The planner spent the next hour expediting those parts to ensure they would arrive in time for Fred and Susan to do their job. The planner then looked at the clock on the wall, and it was lunch time already. Where does the time go?

I am going to stop at the half-day mark, but I bet many of you could fill out the rest of the planner’s day based on your experience. For that planner at ABC Company, days two through five didn’t look much different from the first day back on the job.

If we expect different results, then we must do something differently. If we want to move from a reactive environment to a reliability-centered, proactive environment, we must properly plan and schedule work. That work is not today’s work (reactive), but next week’s work. By planning for next week and beyond, we have the time to get the right parts, prepare the job plan, and properly schedule the equipment and resources to do the work.

As part of creating an environment for success, we have to educate all of the interfacing functions on the proper role of the planner/scheduler and how those other functions contribute to the success of the planning and scheduling work. For example, the daily meeting that reviews the last 24 hours is not the place for the planner, who should be looking out a week and beyond. The maintenance team leader needs to take responsibility for issuing the hot work permit on short notice. In the future, when the job is properly planned, the hot work permit form should be part of the job package.

In addition to educating others on the new role, we have to set the proper expectations with that planner/scheduler and hold that person accountable. As an example of educating others, in the case of ABC Company, I used a supervisors’ training course that I was conducting separately to educate all of the operations and maintenance supervisors on the new role of the planner/scheduler. Next, I spent time with the maintenance technicians to explain the role, how it affected them and whom they should seek out to expedite parts.

Preventive maintenance (PM) is a cornerstone of reliability-based maintenance. It’s no surprise then that PM attainment has become a key performance indicator (KPI). However, it may surprise you that in many organizations, maintenance is not primarily responsible for this KPI.

Maintenance always has an important role, of course. It must create the PM program. It must generate, schedule and plan PMs. It also must have staff with enough skill to accomplish the PM. If maintenance is missing any of these things, it has a lot of work to do. But if PM attainment is already one of your current KPIs, these steps are probably already being done.

Yet if those planned PMs are not being accomplished, there is another likely reason besides maintenance capability. The most common reason for scheduled PMs not being accomplished is a change in the production schedule removing access to equipment and lines that were scheduled for work.

This doesn’t absolve maintenance for missing PM due to other reasons. If it doesn’t have the material, people or tools on hand or has poorly estimated the necessary work, maintenance should be accountable. However, big blocks of PM tasks are often missed due to a change in the production schedule. This usually is not in the control of maintenance.

The schedule often changes because of a short-term need for more production, frequently while there are complaints about the machinery not running well enough. Yet even when the reason for extending the production schedule is to overcome maintenance downtime, the stage has been set for even more downtime to come.

To hold me responsible for a KPI, I should have the means to control it. If I am the maintenance manager, I rarely control the production schedule.

While I like using PM attainment as a metric, my suggestion is to make it a shared metric between the production manager and the maintenance manager (as well as the production planning staff, if you have one). They should all be held accountable if scheduled PM tasks are not performed.

In today’s competitive global environment, we are constantly being asked to do more with less. Now more than ever, companies are asking their employees to become more productive, more efficient and more “lean.”

Strategic planning, value stream mapping, reliability engineering, loss elimination – these phrases have become popular from the boardroom to the shop floor. But where does their value really lie? How do we make decisions that uncover value, eliminate loss and allow for proper strategic planning for the future?

The answer lies not in the decisions we make but in the data we use to make those decisions. Good, strong data is key to making good, strong decisions. We are all familiar with the axiom of “garbage in, garbage out.” So what kind of data should we use? What metrics give us the best snapshot of our current levels of performance?

Asset utilization (AU) and overall equipment effectiveness (OEE) are key performance-based metrics that, when calculated and communicated properly, allow for effective facility management and provide sound backing for potentially difficult business decisions. However, much confusion surrounds their definitions and when and where each should be used.

AU is defined as: availability x rate x quality
OEE is defined as: uptime x rate x quality
As you can see, AU and OEE are similar – the calculations for rate and quality are the same in both and are defined as follows:

Rate = average rate / best demonstrated (or design) rate

where average rate is the speed or efficiency of the system under analysis over a given time period.

The best demonstrated or design rate is determined by the nameplate capacity of the system or the best demonstrated rate recorded. I prefer to use the best demonstrated rate, as empirical data is a better model of the system in question. A 100-percent rate calculation would signal that the system is operating consistently at its maximum demonstrated speed.

Quality = first-pass units / total units produced

where first-pass units are the good units produced over a certain time period that meet customer specifications.

These “quality” units are compared to the total units produced, including those that do not meet customer specifications. A 100-percent quality calculation would signify no scrap or rework in the process.

Based on the above formulas, you can easily see that the difference between AU and OEE lies in whether you use availability or uptime. So what is the difference between the two? They are defined as follows:

Availability = operating time / calendar time

Uptime = operating time / scheduled time

where operating time is defined as the amount of time that the system is actually operating.

Calendar time is based on the 24/7/365 schedule as we know it. Scheduled time is defined as the amount of time the system under analysis was planned or scheduled to operate. Therefore, if I operate my system for six hours on one eight-hour shift during a given day, my uptime is 6/8 or 75 percent, while my availability is 6/24 or 25 percent. That is quite a difference.
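Putting the pieces together, the short sketch below (Python) computes both metrics using the definitions above. The six operating hours on an eight-hour shift repeat the example just given; the rate and quality inputs are hypothetical.

    # Minimal sketch of the definitions above: AU and OEE differ only in the
    # time base used (calendar time versus scheduled time).
    def rate(average_rate, best_demonstrated_rate):
        return average_rate / best_demonstrated_rate

    def quality(first_pass_units, total_units):
        return first_pass_units / total_units

    def uptime(operating_hours, scheduled_hours):
        return operating_hours / scheduled_hours

    def availability(operating_hours, calendar_hours=24.0):
        return operating_hours / calendar_hours

    r = rate(90, 100)        # hypothetical: 90 units/hr vs. best demonstrated 100 units/hr
    q = quality(950, 1000)   # hypothetical: 950 good units out of 1,000 produced
    oee = uptime(6, 8) * r * q           # 0.75 x 0.90 x 0.95 = ~64%
    au = availability(6, 24) * r * q     # 0.25 x 0.90 x 0.95 = ~21%
    print(f"OEE = {oee:.1%}, AU = {au:.1%}")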

Typically, OEE is used to understand how well systems or assets perform based on current business demands and production schedules. AU allows for a better understanding of how well that system or asset is currently utilized and allows insight for future business planning by calculating what type of production could be achieved.

In some industries (petroleum, specialty chemical, etc.), scheduled time is equal to calendar time. In this case, availability equals uptime, so OEE and AU are equal. This scenario is similar to how a square is a rectangle, but a rectangle isn’t a square.

It is important to understand the terminology. My background in Six Sigma differentiated between availability and uptime for AU and OEE, respectively. Lean terminology defines OEE as the product of availability x rate x quality, with availability defined in the same way uptime is defined (above). So which is correct? The truth is it doesn’t matter. What is important is to clearly define what you and your organization are trying to capture, agree on common terminology to allow for clear and concise discussion, and determine the best way to capture the needed data.

As always, the devil is in the details. When setting up data-capture systems to determine rate and quality and uptime/availability calculations, take care to ensure consistent measurement and data validity.

In summary, AU = availability x rate x quality and OEE = uptime x rate x quality, with the only difference being whether operating time is compared to calendar time (AU) or to scheduled time (OEE).

If your company was manufacturing automobiles, appliances, iPads or even cardboard boxes, you certainly wouldn’t think about scheduling production without a complete and accurate bill of materials (BOM) for each finished product so you could determine your raw material requirements from a master schedule. So why is it that many process industries not only begin operation without equipment BOMs, but go for years – sometimes decades – without them?

When we ask people to assess the quality of their equipment BOMs, the comments we get most often are: “They don’t exist,” or “We have some of the data, but we don’t know if it’s accurate.” When we ask why that’s the case, the response is usually: “We don’t have the information,” or more likely, “We don’t have the resources to do all that work.”

So instead of taking the time to build and maintain the BOMs, they just go on without them. That means planners, maintenance and reliability engineers, mechanics, materials management and procurement personnel, and others have to go outside the system to do things like:

  • Determine material requirements for planned work
  • Query and locate parts for emergency and other unplanned work
  • Associate critical spares to specific assets
  • Evaluate part substitutions
  • Assess non-moving material for obsolescence
  • Identify opportunities for part standardization

Somehow there’s time for most of these workarounds, and although it’s hard to calculate, it probably takes two to 10 times as much effort to deal with the lack of information as it would to just fix the problem in the first place. Unfortunately, the focus is on the reactive aspects of what “really needs to be done” and not on the proactive aspects of getting the information into the system. If you don’t keep up, it’s hard to catch up. So what are the five critical success factors for establishing and maintaining effective BOMs?

  1. The most important thing you can do is get the information into the system as early as possible. As soon as you have made a commitment to buy a new piece of equipment, you should be on the phone with the manufacturer or supplier to get the BOM information. Unless there’s a possibility that something could change, you should have the BOM data in the system even before the equipment is installed.
  2. Don’t kill yourself trying to get every last little item into the database. If your system supports an automatic upload of BOM data from an electronic file, then take advantage of it. However, if you have to enter the data manually, make sure you get the most important stuff in there first. A good rule of thumb is that anything you reasonably expect to repair and/or replace should go on the BOM, with the exception of consumables and free issue parts.
  3. Make sure you have a robust process in place to manage changes to the asset base. Whether you call it “management of change,” “configuration management” or “Mikey,” the important thing is that all equipment redesigns, material changes, part substitutions or even significant modifications are assessed to determine the impact on the BOM.
  4. Don’t forget about retired assets. How many times have you had someone look at an expensive or supposedly critical part that hasn’t moved in the storeroom and heard them say, “Oh, we stopped using those years ago when we took out the …” (whatever). It’s so much easier and cost-effective to deal with these situations as they occur.
  5. Make it part of the culture. This isn’t something that can be done randomly or easily driven from the bottom up. It requires management commitment to make it a priority, with clearly defined responsibilities and expectations for each person involved in establishing and maintaining the integrity of the BOMs and accountability for making sure it happens.

Of course, it’s not easy. The easy thing is to do nothing and continue to live with the consequences. Even if you can’t erase all the mistakes of the past, at least put something in place to keep the situation from getting worse. Once you can keep up, it’s easier to catch up.

Setting cautionary and critical limits (or targets) for oil analysis results is essential and irreplaceable as groundwork in an oil analysis program. It’s what helps answer one of the most commonly asked questions: “Is the oil still good?” Nevertheless, the data changes observed, even if they are within the established limits, can still prove to be valuable. In these conditions, trending oil analysis data is where the value is gained and will help answer what might be the next question asked: “When will my oil go bad?”

If you think about it, simply obtaining a snapshot of data from an oil sample is essentially worthless without something to which to compare it. This is why trending data in oil analysis reports is so beneficial. It not only allows you to determine if the current oil properties are unfavorable but also if they will become unfavorable in the near future. Indeed, quality trending provides a powerful means of recognizing when an oil property is moving in an unhealthy or threatening direction.

The most effective way to follow a trend is to consistently collect representative oil samples and track the data from the results by plotting them on a property-versus-time graph. The “property” can be anything from the remaining additives within the oil to the base oil’s changing properties or the number and types of particles.

It is imperative that oil samples are carefully collected and that all variables are minimized or at least addressed. Among the factors that can influence the results are sample location consistency, service life of the machine and oil, makeup oil rates, changes in environmental or operating conditions, oil formulation changes, testing procedure consistency, etc.

The key to success with trending is to learn from the past. This includes others’ past failures, not just those of your machines. Start by identifying when certain oil properties have typically been healthy and use this as the standard. Also, take note of when a change in an oil property has previously led to a machine issue or failure. You must develop the awareness to recognize when a change in a particular property could eventually lead to a problem with the machine.


Figure 1. The world’s population growth

Looking Back at the Past

The world’s population growth offers a good example of the types of trends that can exist within machinery. The earth’s population has been growing for thousands of years, but it wasn’t until around 1800 that it reached 1 billion people. While this was a major milestone, it only took approximately 120 more years to double to 2 billion. Less than 100 years later, the population is rapidly approaching 8 billion people. Many factors have influenced this recent trend, such as the Industrial Revolution and advanced medicine. Figure 1 shows how this rise in population would appear on a graph.

This trend can be compared to the growth of particle contamination in machinery. Particles produce particles. In fact, one particle can generate as many as 20 new particles within a machine. Of course, this will depend on many variables, such as particle ingression rates, the filtration rate, the likelihood of wear generation, etc. Regardless, when particles are the instigator of new particles being created, the contamination can quickly escalate.


Figure 2. An illustration of particle contamination within a machine

By adding quarterly sampling dates and ISO particle contamination codes to Figure 1, we can illustrate a lubricated machine that was accidentally exposed to new contaminants, resulting in increased wear generation. Note the dramatic trend toward the most recent dates in Figure 2. When this type of growth in particle concentration occurs, it points to an imminent machine failure.

In order to predict an impending rapid growth of particle contamination, oil sampling must be performed frequently enough to detect a slight uncharacteristic increase. For example, in Figure 1, if the world population is measured every 1,000 years, the results would be 0.1 billion, 0.1 billion, 0.1 billion, 0.2 billion, 0.2 billion and finally 7 billion. However, if the population is measured twice as frequently or more, it would be much easier to recognize the start of the abnormal increase. Sampling machines for changing oil conditions is no different.
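One simple way to put this into practice is to flag any reading that breaks away from the established healthy baseline. The sketch below (Python) does this for a trended property; the quarterly iron readings and the 1.5x threshold are hypothetical, and real alarms should be based on your own baselines and cautionary/critical limits.

    # Minimal sketch: flag readings that exceed a multiple of the average of
    # all earlier readings (the "healthy" baseline). Values are hypothetical.
    def flag_abnormal_rise(readings, factor=1.5):
        flags = []
        for i in range(1, len(readings)):
            baseline = sum(readings[:i]) / i
            if readings[i] > factor * baseline:
                flags.append(i)
        return flags

    quarterly_iron_ppm = [12, 13, 13, 15, 14, 16, 25, 41]
    print(flag_abnormal_rise(quarterly_iron_ppm))  # -> [6, 7]: the last two samples

The more frequently samples are taken, the earlier such a break from the baseline shows up, just as measuring the population more often reveals the start of its abnormal rise.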

A few years ago, someone mentioned to me that many of his machines were not good candidates for oil analysis because they held so little oil that it wasn’t worth saving. He added that by the time you flushed the sampling port and pulled a proper oil sample, you had almost done an oil change. Why bother with oil analysis?

I’m sure you recognize the misguided purpose of oil analysis in the mind of this individual. While oil analysis can certainly aid in better timed oil changes, it has so much more to offer. In fact, for machines that are mission-critical, the cost of changing the oil is small potatoes in comparison to the value gained from averting a catastrophic machine failure. If oil analysis was only about tracking the remaining useful life of the lubricant, only a fraction of the oil samples analyzed every year could be economically justified.

Think of the oil more as an information messenger of numerous failure modes and root causes of failure. As I’ve said many times, it’s hard for a machine to be in trouble without the oil knowing about it first. For most labs, the number of non-conforming samples from oil analysis will generally exceed 20 percent. In other words, more than one out of every five samples has a reportable condition that requires a corrective response. For this reason, you must be prudent about which machines are selected for oil analysis as well as the sampling frequency.

Like most reliability decisions, being wise in selecting machines to include in an oil analysis program requires a strategy of precision and optimization. This selection is a critical attribute of the Optimum Reference State (ORS) and demands careful consideration. Included in this is an assessment of machine and lubricant criticality, as described below.

The Importance of Saving the Machine

So many reliability and maintenance decisions hang on the assessment of Overall Machine Criticality (OMC). This includes oil analysis and all other machine condition monitoring methods. Critical equipment should be checked more frequently than less critical equipment. Based on the definition of “critical,” this refers to the machines with the highest importance to you, your company and your process.

Of course, it is essential to know how to define an asset as critical. There are many approaches to determine an asset’s criticality. Some plants employ a simple 1 to 10 grading scale and subjectively assign numbers.

The OMC assesses criticality in the context of lubrication. It is calculated as the product of the Machine Criticality Factor (MCF), which relates to the consequences of machine failure, and the Failure Occurrence Factor (FOF), which corresponds to the probability of failure.

A machine’s candidacy for oil analysis, as reflected by its OMC, is influenced by factors such as:

  • whether the machine is exposed to failure-inducing conditions (loads, speeds, shock, contamination, etc.);
  • whether the machine is a bad actor (chronic problems);
  • whether the consequences of failure are high (safety, downtime, repair costs, environmental effects, etc.);
  • whether failures can be lubricant-induced (degraded or contaminated oil);
  • whether failures can be revealed by the oil (e.g., wear debris from shaft misalignment); and
  • whether early detection is important.

The Importance of Saving the Oil

The importance of saving the oil is best assessed by the Overall Lubricant Criticality (OLC). The OLC defines the significance of lubricant health and longevity as influenced by the probability of premature lubricant failure and the likely consequences (for both the lubricant and the machine). The OLC is calculated as the product of the Lubricant Criticality Factor (LCF) and the Degradation Occurrence Factor (DOF), both described below. Like many such methods, this approach is not an exact science but nevertheless is grounded in solid principles of applied tribology and machine reliability.

The Lubricant Criticality Factor (LCF) defines the specific economic consequences of lubricant failure separate from machine failure consequences. The LCF is influenced by the cost of the lubricant, the cost of downtime to change the lubricant, flushing costs and system disturbance costs (e.g., the fishbowl effect). For instance, machines that use large volumes of expensive, premium lubricants will understandably have high LCF values. Studies have shown the true cost of an oil change can far exceed 10 times the apparent cost (labor and oil costs).

The Degradation Occurrence Factor (DOF) defines the probability of premature lubricant failure. The conditions that influence this probability are shown below.

Lubricant Robustness – Synthetics and other chemically and thermally robust lubricants lower the DOF.

Operating Temperature – Lubricants exposed to high operating temperatures, including hot spots, can experience accelerated oxidation and degradation. The presence of such conditions will raise the DOF.

Contaminants – Contaminants such as water, dirt, metal particles, glycol, fuel, refrigerants, process gases, etc., can sharply shorten lubricant service life. The presence of such exposures will raise the DOF.

Lubricant Volume and Makeup Rate – Lubricant volume relates to the amount of additives available to fight oil degradation, the estimated runtime to complete additive depletion and the concentration of contaminants. In normal service, it can take years to burn through the additives in systems containing thousands of gallons of lubricant. The makeup rate refers to the introduction of new additives and base oil. New additives replenish depleted additives, and new base oil dilutes pre-existing contaminants. A high oil volume and a high makeup rate will reduce the DOF.

Machines to Include in Your Oil Analysis Program

Machines that are good candidates for oil analysis have high OMC or OLC values (say, above 5). Even marginal OMC/OLC machines may be well-suited for a streamlined oil analysis program (fewer samples, fewer tests, etc.). Using this methodology, much of the guesswork is taken out of the first major decision related to any oil analysis program. Once your machines are selected, you can then use the OMC and OLC values to determine the oil sample location, oil sample frequency, test slate, alarms and limits, and data interpretation strategy.
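
To make this concrete, here is a minimal Python sketch of how the OMC and OLC products might be used to screen a machine list. The scales (consequence factors scored 1 to 10, occurrence factors expressed as probabilities), the example machines and the threshold of 5 are illustrative assumptions only, not a prescribed scoring system, and the OLC is formed here as the simple product of LCF and DOF by analogy with the OMC.

    # Minimal sketch (assumed scales): consequence factors (MCF, LCF) scored 1-10,
    # occurrence factors (FOF, DOF) expressed as probabilities between 0 and 1.
    # Machine names and numbers are hypothetical.
    machines = [
        # (name, MCF, FOF, LCF, DOF)
        ("Dewatering centrifuge gearbox", 9, 0.7, 8, 0.5),
        ("Standby transfer pump",         3, 0.2, 2, 0.3),
        ("Main turbine lube system",     10, 0.6, 9, 0.3),
    ]

    THRESHOLD = 5  # "say, above 5" per the text; meaningful only for the chosen scales

    for name, mcf, fof, lcf, dof in machines:
        omc = mcf * fof   # Overall Machine Criticality = MCF x FOF
        olc = lcf * dof   # Overall Lubricant Criticality, formed here as LCF x DOF
        include = omc > THRESHOLD or olc > THRESHOLD
        print(f"{name}: OMC={omc:.1f}, OLC={olc:.1f}, oil analysis candidate: {include}")

Machines falling just below the threshold could still be routed into the streamlined program described above.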

When the city of Fort Collins, Colorado, purchased new planetary gearboxes for its wastewater sludge dewatering centrifuges, it decided to implement a proactive used oil analysis maintenance strategy. This decision resulted in significant cost savings.

Unconventional Machinery

Two of the machines were installed new in 1998 at a cost of $619,000 each. They replaced existing sludge dewatering belt-filter presses. The new machines were complex by comparison, requiring additional research into proper maintenance to ensure years of cost-effective, reliable service. Appropriate maintenance tasks also had to be developed for this new equipment.

The centrifuges are critical to the Water Reclamation and Bio-Solids Division. The belt-filter presses have since been decommissioned, although technically they are on standby as backups. However, for various reasons, startup and re-use of these filter presses are not encouraged.

Each centrifuge has a bowl assembly that is V-belt driven by a 300-horsepower, AC-induction motor at approximately 1,748 revolutions per minute (rpm). The driven bowl speed is roughly 2,800 rpm. A back-driven scroll assembly within this bowl rotates at approximately 3 rpm less than bowl speed. The scroll regulates the rate at which the dewatered solids exit the machine in order to obtain an optimum percentage of dewatered bio-solids. The scroll is powered by a back-drive system that consists of a 100-horsepower DC motor and synchronous belt drive turning the planetary gearbox. The average service duty is approximately six to 10 hours per day, four to five days per week. Loss of one of these centrifuges would mean extended hours of operation for the remaining centrifuge and more labor hours for the operations staff.

A Proactive Maintenance Strategy

 

Sludge dewatering centrifuges

After the manufacturer’s maintenance recommendations and guidelines were reviewed, it was concluded that the proper gear oil level would need to be maintained. The manufacturer’s lubrication schedule specified an annual oil drain and replacement of the gear oil. The quantity and type of oil required was 15 quarts of synthetic gear oil. The manufacturer also suggested a gearbox exchange program in which the gearbox would be replaced every two years with a factory-reconditioned gearbox. The exchange cost was estimated at $8,000 each, not including labor.

The city began exploring ways to extract a representative used oil sample from these gearboxes for oil analysis. If the condition of the gearboxes could be determined through a predictive or proactive (root cause analysis) strategy, symptoms of early impending failure could be monitored and detected. The root causes of these detected symptoms could then be identified and eliminated, or at least controlled. Ultimately, oil drains could be extended and the associated maintenance costs reduced, with less generation of used oil and less need for new oil. There would also be a greater understanding of the condition of the new gearboxes, increasing reliability and decreasing maintenance costs.

Obtaining Representative Oil Samples

Due to the planetary design of the gearboxes, it was not possible to obtain a representative oil sample during normal operation, and extracting the samples would not be conventional. A small, rigid brass pipe nipple was fabricated into the gearbox oil plug to provide a means of collecting a sample via gravity into an ultraclean container supplied by the offsite oil analysis laboratory.

The sample was collected immediately after shutdown. If it was not obtained within 30 minutes of shutdown, the gearbox outer shell and input shaft were manually rotated numerous revolutions before flushing the sample collection fitting and taking the sample.

After frequent oil sampling to establish baseline data, the sample frequency was initially set at quarterly and later updated to 500 hours of operation. An hour meter was also installed on the drive motor instrumentation.

Representative used oil samples, as well as new oil samples, were collected, labeled with all pertinent data and sent to the oil analysis lab. Cleanliness targets were set at ISO 18/16/13. The new samples would provide a means of determining the physical properties and additives within the oil.

The lab’s oil analysis test slate included a variety of standard tests such as ISO cleanliness, kinematic viscosity at 40 degrees C and 100 degrees C, water by Karl Fischer, acid number and elemental spectroscopy. All of the oil analysis reports and recommendations received from the lab were closely reviewed to monitor the oil’s physical properties, contamination and/or wear metals.


Water by Karl Fischer from 2007-2013

Extending Oil Drains

The oil was not replaced at the manufacturer’s suggested one-year runtime interval because the oil analysis reports continued to be favorable. ISO cleanliness was diligently maintained through offline kidney-loop filtration using micro-glass filters.

As the two-year maintenance interval approached, there was no data-driven reason to replace the gear oil, let alone to exchange the gearboxes. The original equipment manufacturer (OEM) was contacted to inquire whether exchanging the gearboxes was still recommended. The OEM’s response was, “You are doing what no other customer of ours currently does. I’d keep it up. The exchange program is not necessary with this comprehensive strategy.”


Acid number from 2007-2013


Oil viscosity at 40 degrees C from 2007-2013

Continued Cost Savings

Now fast forward to 2013. The two planetary gearboxes are still in service, and all parameters continue to be carefully monitored. The original oil was replaced with an ISO viscosity grade 220 polyalphaolefin (PAO) synthetic gear oil after the 10 gallons of oil that came with the new machines ran out. Considering the initial estimate of $8,000 per gearbox every two years, the cost savings of reconditioned gearbox purchases alone totaled more than $112,000.

The planetary gearbox assembly with the guard shroud removed for viewing

The city of Fort Collins’ oil analysis program continues today in the capable hands of the Water Reclamation and Bio-Solids Division’s maintenance staff. These skilled craftsmen still employ a proactive maintenance strategy with these gearboxes as well as with many other assets in the division. On several occasions, in order to thoroughly clean the internal cavities of these machines, the staff has completely disassembled, cleaned and rebuilt each of these centrifuges, saving the city tens of thousands of dollars. Through it all, the planetary gearboxes have been removed and re-installed without the need for replacement.

While the city’s oil analysis program includes a number of other assets, with several worthy of their own testimonials, the proven strategy for these two planetary gearboxes continues to speak for itself.

“Our recent oil analysis results showed high levels of potassium in the engine oil, which we assumed would have been caused by glycol. However, after a thorough investigation, we discovered that no antifreeze or glycol was leaking. Besides glycol, what else could be causing high potassium in engine oil? Are there any other potential sources?”

Potassium contamination is generally linked to glycol (the primary constituent of antifreeze) along with other elements like sodium, boron, chromium, phosphorus and silicon. While many of these elements may originate from other sources, potassium is almost always traceable to glycol contamination.

It should be noted that potassium could be sourced from airborne contaminants like fly ash, paper mill dust, road dust and granite, and even found as a trace element in fuel. Your environment may help you determine which type of airborne contaminant might be causing the problem. However, if you suspect glycol contamination as a result of a coolant leak, you should run additional oil analysis tests to confirm it. These tests would include Fourier transform infrared (FTIR) spectroscopy, gas chromatography (GC), Schiff’s reagent method and the blotter spot test.

In the same way that lubricants are a formulation of base oils and additives, antifreeze is formulated with additives that have various functions. One of these additives contains potassium. This element (along with sodium) becomes a common indicator of glycol contamination in engine oil because it is one of the most stable elements within glycol. To verify whether glycol is the contributor to the potassium in the oil, one of the above tests should be performed.

FTIR analyzes molecules instead of elements using an infrared spectrometric technique. The spectrum reported can show increases and decreases of molecules at specific absorption bands, including glycol. This test method can provide a large range of potential indicators but is often challenged with interferences from other contaminants and properties. It also cannot indicate glycol contamination until it reaches at least 1,000 parts per million.

Gas chromatography is a much more accurate means of measuring glycol in an engine oil. This method employs centrifugation to separate the glycol. A chromatogram is then generated to indicate polar compounds. Although this test has limitations that could lead to false positives or negatives, an experienced laboratory using the proper technique should be able to provide the results needed to confirm glycol contamination.

Warning and limit levels from elemental spectrometric analysis indicate how much foreign material in used oil is still tolerable or, when compared with fresh oil, when the altered lubricant must be changed. Values well above tolerable wear levels can also indicate an acute damage process. However, it is not easy to specify these warning levels. Hardly any engine or equipment manufacturer defines limit levels for used oil. This is because the operating conditions and times are too specific, and the origins of the foreign particles found in the oil are too diverse. Consequently, determining these factors is one of the essential tasks of every oil analysis. After all, the type, quantity and (to a certain extent) the size of the particles provide valuable information about wear, contaminants and the additives in the oil.

When warning and limit levels are used for the diagnosis of a specific oil specimen, the interactions between the values and other criteria should also be taken into account. A variety of factors play a role here, including the engine manufacturer, the engine type, the type of fuel used, the oil volume, the motor oil type, the service life of the motor oil, and any top-up quantities (makeup oil). The operating conditions can also vary markedly from one situation to the next. After all, the engine of a heavy construction machine operates under different conditions than the same engine of a truck traveling long distances on a highway at uniform speed.

However, all of these engines have one thing in common: their motor oil contains a lot of valuable information about the oil itself as well as the state of the engine. For example, the microscopic particles suspended in the oil provide an indication of the amount of wear of the corresponding parts or components. Elements such as sodium, potassium or silicon indicate contamination by road salt, hard water, glycol antifreeze or dust. Comparing the amount of organometallic additive elements (such as calcium, magnesium, phosphorus, zinc, sulfur or boron) in the used oil to fresh oil provides an indication of changes to the oil, such as additive depletion or possibly the mixing of different types of oils.


Table 1. Wear elements

Inductively coupled plasma (ICP) elemental analysis can be used to determine more than 30 different elements in motor oils. In addition to the presence of the elements, atomic emission spectroscopy (AES) by ICP can be used to determine the concentrations of the elements.

Laboratories routinely determine the following elements and values as part of motor oil testing and list them in the lab report: iron, chromium, tin, aluminum, nickel, copper, lead, calcium, magnesium, boron, zinc, phosphorus, barium, molybdenum, sulfur, silicon, sodium and potassium. In some cases, other elements are also determined, such as silver, vanadium, tungsten or ceramic elements like cerium and beryllium, which are rarely present in motor oils. They are only listed in the lab report if they are actually proven to be present or if the customer specifically requests this. Tables 1-3 show the possible causes for the presence of the elements found in oil, i.e., whether they are related to contaminants, wear or additives.


Table 2. Contaminants

Various factors must be taken into account when interpreting a lab report and the values of the elements found in the oil. Naturally, it is not sufficient to simply report the elements and their quantities. In order to assess the measured values, you must know whether the individual elements indicate contamination, wear or changes to the additives. However, these values are also interrelated to a certain extent. The relative proportion of various wear elements provides an indication of the affected machine parts or components, for example. Further, it is important to know how long it has taken for the oil to become enriched with specific wear elements since the last oil change. The operating time of the overall system or the running time of the engine, the oil volume relative to the engine power, and the top-up amounts must also be considered when analyzing or diagnosing warning levels.

In order to reliably assess the values determined for the used oil and their relationship to each other and to other factors, it is necessary to have a suitably large volume of data and analytical expertise. However, additive elements and base oil types can differ considerably depending on the type of oil used, so it is necessary to set suitably broad warning levels. Specific warning levels can only be defined for a specific oil type.


Table 3. Additives

The warning and limit levels listed in Tables 1-3 for wear elements, contaminants and additives are based on a semi-synthetic motor oil (SAE 10W-40) in a modern diesel engine with an oil volume of approximately 25 to 50 liters, using fuel compliant with EN 590 (containing 5 percent fatty acid methyl esters), and with an oil service life of approximately 500 operating hours or a mileage of approximately 47,000 miles.

The basic rule is that warning levels must be set lower for greater oil volume, shorter oil service life, lower engine speed and lighter load conditions.

However, the stated values depend distinctly on the oil manufacturer, the specific engine type, the service life of the oil charge, the oil volume and the top-up quantities (if any).

The ability to interpret oil analysis results is crucial for guiding decisions about preventive maintenance activities. Having someone in your organization who can pick up a report and interpret it in the context of the environment is essential. This is a skill that can easily be developed with a minimal investment in training and certification. This article will address the fundamentals of oil analysis and how to interpret the resulting reports.

Reviewing the Report

Once an analysis is completed, it is important to review the report and interpret the accompanying data. Based on the report, you can determine whether action is needed. The report does not always pinpoint specific problems, but it does provide a starting point for analysis.

Each test should be clearly identified. The information usually is organized in a spreadsheet format with numbers indicating the test results. When looking at your reports, the first thing you should do is to ensure that they are indeed your reports. Be certain the report includes your name, lube type, machine manufacturer and machine type.

The report should also clearly state your machine and lubricant condition. The laboratory should have a rating system that notifies you of normal, marginal and critical levels. In addition, the report should include comments from the analyst who reviewed your results. These comments will help you gauge the criticality of the problem and provide a suggested course of action.

Interpreting Viscosity Results

Viscosity is the most common test run on lubricants because it is considered a lubricant’s most important property. This test measures a lubricant’s resistance to flow at a specific temperature. If a lubricant does not have the right viscosity, it cannot perform its functions properly. If the viscosity is not correct for the load, the oil film cannot be established at the friction point. Heat and contamination are also not carried away at the appropriate rates, and the oil cannot adequately protect the component. A lubricant with improper viscosity can lead to overheating, accelerated wear and ultimately the failure of the component.

Industrial oils are identified by their ISO viscosity grade (VG). The ISO VG refers to the oil’s kinematic viscosity at 40 degrees C. To be categorized at a certain ISO grade, the oil’s viscosity must fall within plus or minus 10 percent of the grade. So for an oil to be classified as ISO 100, the viscosity must fall within 90 to 110 centistokes (cSt). If the oil’s viscosity is within plus or minus 10 percent of its ISO grade, it is considered normal. If the oil’s viscosity is greater than plus or minus 10 percent and less than plus or minus 20 percent, it is considered marginal. Viscosity greater than plus or minus 20 percent from grade is critical.

ISO VG Midpoint and Limits (KV 40° C, mm2/s)
ISO VG          Midpoint   Min.     Max.
ISO VG 10       10         9        11
ISO VG 15       15         13.5     16.5
ISO VG 22       22         19.8     24.2
ISO VG 32       32         28.8     35.2
ISO VG 46       46         41.4     50.6
ISO VG 460      460        414      506
ISO VG 680      680        612      748
ISO VG 1000     1000       900      1100
ISO VG 1500     1500       1350     1650
ISO VG 2200     2200       1980     2420
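
Because the plus-or-minus 10 and 20 percent bands come up so often in report interpretation, the short Python sketch below applies them to a measured viscosity at 40 degrees C. The function name and return labels are my own; the bands follow the rule described above.

    def classify_viscosity(measured_cst, iso_midpoint_cst):
        """Classify a KV40 result against its ISO VG midpoint:
        within +/-10% is normal, within +/-20% is marginal, beyond that is critical."""
        deviation = abs(measured_cst - iso_midpoint_cst) / iso_midpoint_cst
        if deviation <= 0.10:
            return "normal"
        if deviation <= 0.20:
            return "marginal"
        return "critical"

    # Example: an ISO VG 100 oil measured at 115 cSt is 15 percent high -> marginal
    print(classify_viscosity(115.0, 100.0))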

Measuring Metals: Elemental Spectroscopy

Analyzing an oil analysis report involves understanding the concentration of expected and unexpected elements in your oil. Some contaminants are picked up as the oil circulates and splashes off different machine components and surfaces. Other contaminants can enter the machine during manufacturing or routine service, as well as through faulty seals, poor breathers or open hatches. No matter how the contaminants enter the oil, they can cause significant damage.

Elemental spectroscopy is a test used to determine the concentration of wear metals, contaminant metals and additive metals in a lubricant. A concentration of wear metals can be indicative of abnormal wear. However, spectroscopy cannot measure particles larger than roughly 7 microns, which leaves this test blind to larger solid particles. As with any type of testing, spectroscopy is subject to inherent variance.

When oil additives containing metallic elements are present, significant differences between the concentrations of the additive elements and their respective specifications can indicate that either incorrect oil is being used or a change in the formulation has occurred. Also, keep in mind that sump sizes can vary in custom applications.

Understanding Wear Limits

When reviewing the wear levels in your test results, look at the trend history of each machine, not just the recommendations from the original equipment manufacturer (OEM). OEMs offer good benchmarks, but it is not wise to just follow their recommendations because most machines are used differently.

For example, two identical pieces of equipment may have vastly different elemental spectroscopy results due to variations in load, duty cycle and maintenance practices. Their results might even show a variety of particle count levels. Both machines could still be considered healthy based on the trending of the analysis.

Trending is extremely important in determining a machine’s health. A good rule of thumb is to use your best judgment and review the trend data. Has anything changed with the operating conditions? Have you been running the machine longer? Have you been putting more load on the machine? You should also discuss the test results with the lab analyst before making any decisions.

Watch for Contaminants

Contamination causes a number of oil system failures. It often takes the form of insoluble materials such as water, metals, dust particles, sand and rubber. The smallest particles (less than 2 microns) can produce significant damage. These typically are silt, resin or oxidation deposits.

The objective with contaminants is to detect the presence of foreign materials, identify where they came from and determine how to prevent further entry or generation. Contaminants act as a catalyst for component wear. If the cycle is not broken, wear accelerates and serviceability is degraded.

Typical elements that suggest contamination include silicon (airborne dust and dirt or defoamant additives), boron (corrosion inhibitors in coolants), potassium (coolant additives) and sodium (detergent and coolant additives).

Quantifying the Amount of Water

When free water is present in oil, it poses a serious threat to the equipment. Water is a very poor lubricant and promotes rust and corrosion of metal surfaces. Dissolved water in oil produces oxidation and reduces the oil’s load-handling ability. Water contamination can also cause the oil’s additive package to precipitate. Water in any form results in accelerated wear, increased friction and high operating temperatures. If left unchecked, water can lead to premature component failure.

The Karl Fischer coulometric moisture test is the most common method used to analyze water levels in oil. When reviewing these test results, remember that low levels of water are typically the result of condensation, while higher levels can indicate a source of water ingress. In most systems, water should not exceed 500 parts per million.

Common sources of water include external contamination (breathers, seals and reservoir covers), internal leaks (heat exchangers or water jackets) and condensation.

Determining Oil Condition: Acid Number

Acid number (AN) is an indicator of oil condition. It is useful in monitoring acid buildup. Oil oxidation causes acidic byproducts to form. High acid levels can indicate excessive oil oxidation or additive depletion and can lead to corrosion of internal components.

Acid number testing uses titration to detect the formation of acidic byproducts in oil. This test involves diluting the oil sample and adding incremental amounts of an alkaline solution until a neutral end point is achieved. Since the test measures the concentration of acids in the oil, the effects of dilution often negate the effectiveness of acid number testing.

Similarly, some oils containing anti-wear or extreme-pressure additives that are mildly acidic can also provide false high or low readings due to additive depletion. Acid number values should be considered in concert with other factors such as additive health and water content.

Gauging Particle Counts

The concentration of wear particles in oil is a key indicator of potential component problems. Therefore, oil analysis must be capable of measuring a wide range of wear and contaminant particles. Some types of wear produce particles that are extremely small. Other types of wear generate larger particles that can be visually observed in the oil. Particles of any size have the propensity to cause serious damage if allowed to enter the lubricating oil.

Particle count analysis is conducted on a representative sample of the fluid in a system. The particle count test provides the quantity and particle size of the various solid contaminants in the fluid. The actual particle count and subsequent ISO cleanliness code are compared to the target code for the system. If the actual cleanliness level of a system is worse than the desired target, corrective action is recommended.

Particle counts generally are reported in six size ranges: greater than 4 microns, greater than 6 microns, greater than 14 microns, greater than 25 microns, greater than 50 microns and greater than 100 microns. By measuring and reporting these values, you can gain an understanding of the solid particles in the oil. Monitoring these values also can help confirm the presence of large wear particles that cannot be seen through other test methods. However, particle counting simply indicates the presence of particles and does not reveal the type of particles present.

ISO Cleanliness Code

The ISO cleanliness code is utilized to help determine solid contamination levels in both new and used oils. The current ISO standard for reporting cleanliness is ISO 4406:1999.

In accordance with this standard, the values used from the particle count data are related to the greater than 4, greater than 6 and greater than 14 micron levels. The raw data at these micron levels are compared to a standard table and then translated to a code value.

It is important to understand the concept behind the ISO code table. The maximum value of each range is approximately double the maximum of the preceding range, and the minimum value of each range is likewise nearly double the minimum of the preceding range. In effect, raising two to the power of the range number and dividing by 100 gives the approximate upper particle count per milliliter; the table below expresses these limits per 100 milliliters.

ISO 4406 Chart
Range Number    More than    Up to and including
(particles per 100 ml)
24 8,000,000 16,000,000
23 4,000,000 8,000,000
22 2,000,000 4,000,000
21 1,000,000 2,000,000
20 500,000 1,000,000
19 250,000 500,000
18 130,000 250,000
17 64,000 130,000
16 32,000 64,000
15 16,000 32,000
14 8,000 16,000
13 4,000 8,000
12 2,000 4,000
11 1,000 2,000
10 500 1,000
9 250 500
8 130 250
7 64 130
6 32 64
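
For illustration, the Python sketch below converts particle counts per milliliter at the greater-than-4, 6 and 14 micron levels into an approximate ISO 4406 code using the power-of-two relationship described above. The function names are my own, and because the published table uses rounded boundaries, a count near a boundary may land one range number away from a lab's official result.

    import math

    def iso4406_range_number(count_per_ml):
        """Approximate range number: the upper limit of range R is roughly
        2**R particles per 100 ml (i.e., 2**R / 100 per ml)."""
        if count_per_ml <= 0:
            return 0
        return math.ceil(math.log2(count_per_ml * 100))

    def iso4406_code(count_4um, count_6um, count_14um):
        """Build the three-part code (e.g., 18/16/13) from counts per ml."""
        return "/".join(str(iso4406_range_number(c))
                        for c in (count_4um, count_6um, count_14um))

    # Example: 2,000 / 450 / 60 particles per ml at >4, >6 and >14 microns
    print(iso4406_code(2000, 450, 60))  # roughly "18/16/13"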

Analytical Ferrography

Analytical ferrography is among the most powerful diagnostic tools in oil analysis today. When implemented correctly, it can be an excellent tool for identifying an active wear problem. However, it also has limitations. Analytical ferrography is frequently excluded from oil analysis programs because of its comparatively high price and a general misunderstanding of its value.

The results of an analytical ferrography test typically include a photomicrograph of the found debris along with specific descriptions of the particles and their suspected cause. Particles are categorized based on size, shape and metallurgy. Conclusions can be made regarding the wear rate and health of the component from which the sample was drawn. The analyst relies on composition and shape to determine the characteristics of the particles. Due to the subjective nature of this test, it is best to trust the analyst’s interpretation regarding any action to be taken. This test is qualitative, which means it relies on the skill and knowledge of the ferrographic analyst.

While most lubrication professionals rely on oil analysis to help safeguard their equipment from unplanned downtime, an inability to dissect and comprehend a problematic report often yields inappropriate action when abnormal results appear. Your lab can only provide you with the machine condition data. It is up to you to take action.

“What is the best method to detect soot in diesel oil? Our labs use Fourier transform infrared (FTIR) spectroscopy as a primary means of measuring soot in used diesel engine oils. I have been told that viscosity at 40 degrees C is a good indicator as well. Which instruments or methods will give me the most accurate soot level?”

There are several available tests that can detect soot load in diesel oil. As a screen test with a lower cost, FTIR is a great indicator of soot. It is capable of measuring more than a dozen parameters, with some more reliable than others depending on the susceptibility to interference in the established wavenumber region. While the data collection is relatively easy, there are challenges with measurement accuracy, especially as the size of the soot particles increases and constituents like dirt are included. The maximum detection limit can range from 1.5 to 5 percent. This is concerning since the critical limits for engines with exhaust gas recirculation (EGR) may be 8 percent, and non-EGR systems may be around 5 percent.

Other alternatives include the pentane insolubles test, the light extinction method and thermogravimetric analysis. The pentane insolubles test consists of separating insolubles from the oil with the aid of a solvent mixed with the lubricant. The solvent may commonly be pentane and toluene. The insolubles are flung out of the mixture with a centrifuge or filtered out with a filter membrane. While this is a preferred method with a lower cost, it poses concerns when other insolubles are included, as they will be measured together.

The light extinction method involves casting light at specific frequencies through the oil and measuring its obstruction by the drop in voltage. Again, there are some issues with this method related to other objects potentially obstructing the light, including water and air bubbles.

Thermogravimetric analysis may provide the most accurate measurement of soot load in an oil sample. It requires heating a sample through different stages and calculating the soot concentration by comparing the weight of the volatile and ash components to that of the original sample. This test can have a much higher cost, so it is not a viable substitute for routine tests. Nevertheless, it is perfectly acceptable for exception testing after a screening test has been performed.

Soot dispersancy is an important lubricant property. It is defined as the lubricant’s ability to keep soot particles finely dispersed and avoid agglomeration into larger soot particles. This can be measured with a simple method known as the blotter spot test, which allows for a visual representation of the soot’s dispersancy.

“Can I use a lubricant after its expiration date to lubricate a machine? What test is required before its use?”

Whether you should use a lubricant after its expiration date depends on the type of lubricant. For instance, if it is a highly additized lubricant like engine oil, the likelihood of the additives stratifying (separating out of the oil) and settling to the bottom of the container is very high. However, if it is an R&O (rust and oxidation-inhibited) oil with few additives, this would not be as critical of a concern.

In addition, you must consider how the lubricant has been stored. Has it been left out in the elements or was it stored indoors with air conditioning and desiccant breathers on the containers to stop dust and moisture from entering the oil?

Also, how long has the lubricant been stored — two months, five years, etc.? Is the lubricant used in a machine that is critical for production or is it utilized in a component that has a lot of redundancy (backup processes) and will not affect production if it goes down?

The volume of oil is another important factor in whether you should replace the lubricant, throw it out or have it tested and reconditioned. If there is any question as to the quality of your oils, you should have them tested or replaced. Of course, you must determine whether it is cost-effective to have the lubricant tested/re-additized.

After you have answered these essential questions, tests such as viscosity, acid number, elemental spectroscopy (to verify the additive package), water by Karl Fischer and particle count should be performed before the lubricant is put into service.

It is recommended that lubricant oils be stored no longer than 12 months and greases no more than six months. Also, for optimal protection of your machines, use the first-in/first-out (FIFO) storage method to maintain lubricant freshness.

“I am confused by an oil analysis result I received a few days ago. It was for a sample from a gas engine. The concentration of copper and lead were high, so I decided to dismount the engine and check some of the parts and bearings, but didn’t see any abnormal wear. I’m so frustrated. When do you know you need to open an engine in order to prevent catastrophe? In your experience, when is it necessary to open an engine (whether gas or diesel) after having received an oil analysis report?”

Typically, when copper and lead are present in an oil analysis report, it is an indication of bearing wear. Determining whether it is justified to remove the engine and rebuild will require further investigation. Just one oil sample rarely provides enough information to make a diagnosis. You should ask a number of questions before deciding to dig into the engine, such as:

  • What are the current levels of copper and lead?
  • What were the levels in the previous analysis?
  • When was the last time an oil change was performed?
  • Has the rate of change progressed since the last oil sample?
  • What are the possible causes of copper and lead in the oil sample?
  • Is further testing needed?
  • Are there any other warning signs?

While a rise in copper and lead may be related to bearing wear inside an engine, there are other possible sources. Copper can be found in journal bearings, various bushings, radiators and even as an anti-wear additive in the oil. Lead is primarily used in journal bearings but is also employed as solder in radiators. Any radiator or oil cooler leaching may appear as elevated levels of copper in the report.

To investigate an abnormal result, first make certain the sample was not labeled incorrectly. Communicate with the laboratory to ensure a mistake wasn’t made on their end. Another oil sample should be taken to verify the data. If the results are conclusive, additional testing should be performed to find the root cause of the issue. For example, a filtergram could provide helpful information in determining whether the copper and lead results were from bearing wear or a coolant leak.

Examining other elements within the report may reveal more details. If a bearing is wearing to the point of catastrophic failure, the iron content and particle count should show a rise as well. If a coolant leak is present, an elevated potassium result may be shown on the report.

Remember, when using oil analysis as a deciding factor for rebuilding equipment, you first must have sufficient evidence that a rebuild is warranted. This means one test result may not be enough. The full benefits of oil analysis are obtained when a trend can be established and monitored. This may require multiple samples and someone who is well-versed in interpreting the results.

“What are the acceptable limits of nitrogen oxides in the exhaust of gas engines? Recently, one of our customers was facing nitration issues in a gas engine. A nitrogen oxide test was recommended, and the result was 700 milligrams per cubic meter (mg/m3), which was adjusted to 300 mg/m3.”

Oils can break down in a variety of ways. One of the most prevalent ways is the degradation of the base oil. Oxidation is perhaps the most widespread form of this degradation, but nitration is also common, especially in some engines.

Nitration is the degradation of the base oil caused by the reaction of oil molecules with nitrogen oxides and other nitrogen compounds. This can occur in a number of ways, but it is most frequently due to combustion issues.

As nitration progresses, more nitrogen compounds are formed, which can lead to an increase in the acidity of the oil, the likelihood of lubricant malfunction and subsequently the wear of internal engine surfaces.

Most oil analysis laboratories test for nitration through a test known as Fourier transform infrared (FTIR) spectroscopy. In this test, an infrared beam is passed through the oil and absorbed at different wavelengths, which correspond to different contaminants and constituents within the oil. This test provides reliable information on things such as soot, nitration, oxidation, fuel, glycol and water.

FTIR is most effective when a reference sample is used for comparison. The new oil should be tested, and the associated wave signature reported. This will serve as the reference signature against which all in-service oil samples will be compared. As a used oil sample is tested, the wave signature is overlaid on the new oil reference signature, and the difference of the spectrum signifies the contaminants or breakdown of the base oil.

When it comes to nitration, the typical absorbance peak is at a wavenumber of approximately 1,630 cm-1. This region has a few interferences, such as viscosity index improvers and dispersants, but for the most part it is an adequate representation of the amount of nitration compounds found in the sample.

As for setting a limit for nitration, you must use the comparative sample. A cautionary limit is typically a 25-percent increase, while a critical limit would be about a 75-percent increase. This does require some consideration of the engine type and fuel type.
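
As a simple illustration of those comparative limits, the Python sketch below flags a used oil's FTIR nitration reading against the new-oil baseline using the 25 percent cautionary and roughly 75 percent critical increases mentioned above. The function name and example absorbance values are hypothetical.

    def nitration_alarm(baseline_abs, used_abs):
        """Flag an FTIR nitration reading (same units as the new-oil baseline):
        roughly a 25% increase is cautionary and a 75% increase is critical,
        subject to the engine and fuel type considerations noted above."""
        increase = (used_abs - baseline_abs) / baseline_abs
        if increase >= 0.75:
            return "critical"
        if increase >= 0.25:
            return "cautionary"
        return "normal"

    # Example: baseline absorbance of 4.0, used-oil reading of 5.5 (37.5% increase)
    print(nitration_alarm(4.0, 5.5))  # cautionary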

By monitoring the base oil health as well as additive health, you can ensure that you are changing the oil on time before the oil goes bad or excessive wear occurs inside the engine.

For an effective oil analysis program, it is essential to set goals for tangible and defensible improvements. Here are four key areas to focus on this year: optimizing oil sampling, eliminating uncertainty in lab report interpretation, advancing your equipment list and following a path to improvement. Select one or all four, and seek team buy-in as soon as possible.

Optimize Oil Sampling

Take the time to assess your company’s oil sampling strategy and execution. There are always ways to enhance your technique, sample location, sampling device or frequency. Typically, the individuals in charge of extracting samples are not given the opportunity to offer their ideas for improvement. In addition, oil sampling technicians often are not encouraged to analyze lab reports.

This year, make time to do the following: Join your lube tech for a round of pulling routine samples, discuss optimal sampling techniques as a group, learn from the lube tech where sample locations can improve, and urge everyone on your team to review the lab reports and provide suggestions.

If the same contamination alarms are triggered repeatedly, examine the oil analysis reports to identify where the sampling technique or location could be improved. Recurring water contamination or heavy wear particle contamination from the same sampling locations usually indicate a need for modifying the sampling technique or location.

Eliminate Uncertainty in Report Interpretation

Laboratories do not expect their clients to understand everything in an oil analysis report without training. To fit test results on one- or two-page reports, labs frequently use acronyms, abbreviations and abridged language that may not always be as easily understood as they should be. It’s rare when all stakeholders in a reliability group are trained to interpret results and can put the puzzle pieces together from the package of tests run on a sample. Laboratory personnel know this and expect interested parties to inquire for the benefit of making informed decisions that can increase machine reliability. Therefore, call, email and request training from your lab. It’s their job to help you.

Advance Your Equipment List

Assist your lab in improving the analysis by providing more detailed information about your lube systems and critical equipment. The following details will be helpful: system oil capacity, date of last oil change, date of last filter change, correct oil brand and viscosity. Ask your lab for the list of sample locations and lubricated equipment to verify existing database records and supply any additional information required.

Follow a Path to Improvement

Set goals for your oil analysis program and define a path for improvement. The most common goal is to reduce the number of warning alarms triggered by equipment or oil condition issues discovered in lab reports. If a routine batch of monthly samples averages 30 warning alarms, try to cut that number in half this year.

The first step to reduce warning alarms is pinpointing where the lab has set unattainable alarm thresholds. In other words, the alarms aren’t properly set. If you make a reasonable request that doesn’t risk the reliability of your equipment, your lab should adjust these threshold alarms.

Once your team has bought into the goal of reducing alarms and all members understand how they can make a difference, you will naturally chip away at the problems and begin to make a positive impact on your oil analysis results.

Many programs focus intently on the ISO code for oil cleanliness. If your organization has already tackled the low-hanging fruit, it might be time for a joint effort to lower the ISO code by one or two levels this year. Attempt this strategically and set a reasonable goal. For example, if certain screw compressors always have higher ISO code results, then agree to exclude them from the calculation.

Oil analysis labs pay close attention to the wear metals detected by emission spectroscopy. Pinpoint the pieces of lubricated equipment that are your repeat offenders for high levels of iron, copper, tin and lead, and determine what will be necessary to make those numbers drop this year. Strive to stop letting those machines trigger warning alarms on your lab reports month after month.

Finally, keep in mind that the most impactful way to quantify improvement, especially for upper management, is to lower maintenance spending without compromising asset reliability over the short and long term.

The well-known KISS principle (keep it simple, stupid) was first coined in the 1960s and came into widespread use in the U.S. Navy shortly thereafter. While it started as a design principle for engineers, it has since been applied to any activity or creative endeavor that has the propensity to become unnecessarily complicated. What becomes overly complicated also becomes, by default, poorly understood and sparsely used. Conversely, the greater genius in design and engineering lies in achieving the design objective through simplicity and pureness of form.

This can be applied to the world of oil analysis in many ways. Increasingly, oil analysis has become engulfed by complex analytical chemistry and mathematical algorithms. This science is successful when it takes the complicated, such as an array of particles of varying shapes, sizes, textures, colors and compositions, and puts their formation into plain English (e.g., cutting wear on cylinder walls). It is less successful when it does the opposite, i.e., overanalyzes and overdetails. If someone asks you for the time, there is no need to give an explanation on how a watch works.

Don’t get me wrong, I’m very proud of the technical progress of the oil analysis field and the tremendous value it has brought to the world of machinery reliability. That said, oil analysis should always be viewed in terms of its many forms. These are not competitive but rather should form a focused and unified activity, each with inherent strengths and weaknesses. Collectively, they enable oil analysis to function at its best. Like all reliability initiatives, this should deliver reliability at the lowest possible cost. It optimizes reliability, not maximizes it. It’s about making the right choices.

For instance, for a given machine, how frequently should you conduct laboratory analysis? How frequently should you perform wear particle characterization? These questions must be answered to achieve the desired optimum reference state (ORS). The four principal forms of oil analysis are identified and described in Figure 1.


Figure 1. The four principal forms of oil analysis

In recent issues of Machinery Lubrication magazine, I’ve introduced Inspection 2.0 as an important reinvention of conventional inspection practices. I see so many low-hanging-fruit opportunities for simple, daily, penetrating machine inspections that often go unnoticed and certainly unexploited. It’s far better to do 100 frequent “screening” inspections than one monthly or quarterly laboratory analysis. Laboratory analysis should still be performed, but it is not a substitute for frequent quality inspections. When it is treated as one, reliability is marginalized and maintenance budgets are wasted.

As a review, Inspection 2.0 can be summarized by the following tenets:

  • Culture of reliability by inspection (RBI)
  • Advanced, tactical inspector skills
  • Machine inspection readiness with inspection windows
  • Advanced inspection tools and aids
  • Inspection protocol that is aligned to failure modes
  • Early fault and root cause emphasis
  • Origin of more than 90 percent of unscheduled work orders

Tactical Inspections Are Purposeful Inspections

With the exception of taste, our four other senses can be effectively used, individually or collectively, for frequent tactical inspections. The concept of tactical inspections is inspection with a purpose. It is not just going through the motions down a checklist. For instance, you don’t just look at oil but rather examine it for specific reasons. The inspectors must know the reasons.

This examination seeks to answer several questions about the health of the oil, the health of the machine and the state of the oil to protect the machine from premature failure. Inspectors should be hunting for something that often is inherently hard to find or notice. The machine, through the oil, will telegraph a signal. The strength of that signal increases as functional failure approaches. Early fault detection is the objective and is best achieved by tactical inspections. I’ll talk about how this can be done visually.

There are no scientific instruments, sensors, algorithms or computers that can outperform the eyes and mind of a human inspector. To get the most out of your sense of sight, you need to know what you’re looking for. Start by constructing a list of root causes and symptoms.

Inspection seeks to find critical states of the oil that cause failure (roots of failure) or reveal active failure in progress (symptoms). As an example, for a diesel engine oil this might be the oil level, soot dispersancy, fuel dilution, coolant contamination and sludge. For an industrial gearbox, you might look for a wrong oil level, dirty oil, water contamination, excessive wear debris, aerated oil and an overextended oil drain.

By knowing the questions, you can work backward to define the tactical inspection protocol that provides the answers. This is a two-step process:

  • Causes and Symptoms (C&S) – For every machine or system component, list what is important to find (ranked by importance).
  • Critical Occurrence States (COS) – For each item on this list, create an inspection protocol that would reveal the state of occurrence (the earlier the better).

A Well-trained Eye

Using the industrial gearbox example, let’s rank the causes and symptoms guided by past experience and help from technical advisors. After each item on the following list are one or more ways to enable earlier alerts by visual inspection. A simple sketch of how this mapping might be recorded follows the list.

  • Wrong Oil Level: Level gauge inspections
  • Dirty Oil: Exposed headspace (vents, breather, hatch, etc.), filter in bypass, rapid rise in the filter pressure differential, entrained air problems, sediment in bottom sediment and water (BS&W) bowls, blotter test sediment
  • Water Contamination: Cloudy oil, free water in BS&W bowls, rust on the corrosion gauge, hydrated desiccant breather, entrained air problems, positive result from a crackle test
  • Excessive Wear Debris: Metallic sediment in BS&W bowls, laser pointer inspection, loaded magnetic plug, metallic debris on the filter’s surface, magnet inspection of oil sample
  • Aerated Oil (Entrained and/or Foam): Sight glass inspection (cloudy or frothy oil), sudden rise in the oil level, hatch inspection, rise in the oil temperature, emulsified water
  • Overextended Oil Drain: Sight glass inspection (dark, sludgy oil), dirty oil, excessive wear debris, soft insolubles on blotter, air-handling problems
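
A lookup structure like the Python sketch below is one simple way to record this C&S-to-inspection mapping (the entries paraphrase the gearbox list above and are not exhaustive); a route sheet or electronic checklist can then be generated directly from it.

    # Minimal sketch of a Causes & Symptoms (C&S) to inspection-protocol mapping
    # for an industrial gearbox, paraphrasing the list above.
    gearbox_inspections = {
        "Wrong oil level": ["Level gauge inspection"],
        "Dirty oil": ["Headspace exposure check (vents, breather, hatch)",
                      "Filter in bypass or rising differential pressure",
                      "Sediment in BS&W bowl", "Blotter test sediment"],
        "Water contamination": ["Cloudy oil in sight glass", "Free water in BS&W bowl",
                                "Rust on corrosion gauge", "Hydrated desiccant breather",
                                "Crackle test"],
        "Excessive wear debris": ["Metallic sediment in BS&W bowl", "Laser pointer test",
                                  "Loaded magnetic plug", "Debris on filter surface"],
    }

    # Print a simple checklist for the inspector's route sheet.
    for cause, checks in gearbox_inspections.items():
        print(cause)
        for check in checks:
            print("  -", check)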

After each inspection (that passes), the inspector should have a high level of confidence that there are no active or abnormal C&S conditions related to the oil or machine. This is done by skillful inspection in search of the COS. If you engineered your inspection protocol properly, it would be extremely difficult for there to be an active C&S in progress without a positive alert from an inspection of each of the COS. These critical occurrence states are designed to effectively reveal C&S events.

Routine Inspections

A routine inspection consists of quick and frequent inspection events not generally requiring the use of tools, pulling a sample or special inspection aids. The following are examples of routine visual inspections related to lubricating oil:

Oil Level – Visually inspect the dipstick, level gauge or sight glass.

Oil Color and Clarity – This involves a sight glass inspection aided by a strong light. Usually a comparator image is used.

Foam Presence and Stability – This can be determined by some sight glasses or headspace inspections, or both.

Entrained Air Presence and Stability – Also generally assessed by sight glasses and headspace inspections.

Free Water – Inspect water traps or BS&W bowls for a free water phase.

Emulsified Water – Inspect sight glasses for turbidity.

Oil Sediment and Floc – Inspect sight glasses and BS&W bowls for stratified solids and soft insolubles.

Gauge and Sensor Inspections – These inspections utilize various digital and analog gauges, including temperature, pressure and flow. Some machines have sensors that report oil properties, such as particle count, wear particle density, water contamination and viscosity.

Heat Gun Inspection – This provides a quick, quantitative assessment of the oil temperature on critical machine surfaces.

Magnetic Plug Inspections – Some sight glasses have integrated magnetic plugs for quick and effective observation.

Headspace Inspection – Hinged hatch access aided by a strong light can enable observation of bathtub rings, varnish and foam.

Corrosion Gauge Inspection – Similar to magnetic plugs, these gauges can be quickly inspected to reveal corrosive conditions associated with corrosion agents, impaired rust inhibitors, etc.

Leakage Inspection – Failed seals and radial shaft movement can cause leakage, but this can also be due to a sudden drop in oil viscosity, change in oil chemistry or ingression of certain liquid contaminants.

Exception Inspections

Exception inspections are conducted either because of a reportable or questionable routine oil inspection or as the result of an abnormal operating condition. Most exception inspections require the extraction of an oil sample and a simple test that can be performed at the machine or on a benchtop. The following are examples of visual exception inspections related to lubricating oil:

Blotter Spot Test – This simple test can be extremely helpful for detecting a range of contaminants and abnormal oil conditions.

Blender Test – This test can be performed with a blender or graduated cylinder. It is useful for revealing certain contaminants, degraded oil chemistry, impaired air-handling ability and other abnormal conditions.

Inverted Test Tube – This is an old method that uses the rate of rising air bubbles to roughly estimate oil viscosity. Graduated cylinders or sample bottles can be utilized as well.

Oil Drop on the Surface of Water – Certain additives and chemical contaminants influence the interfacial tension of lubricants. Placing a couple drops of oil on the surface of water can quickly exhibit this. Compare the results to that of new oil.

Cold Oil Turbidity – Oil with trace amounts of water can be assessed by placing a sample of the oil in a refrigerator for an hour. At cold temperatures, the oil’s saturation point drops, so dissolved water comes out of solution and becomes visibly noticeable as a cloudy appearance.

Hot Oil Clarity – Soft oil insolubles (oxides, organic materials, dead additives, insoluble additives, varnish potential, etc.) and some emulsified water will often quickly dissolve in the oil when heated. This is visibly noticed by the oil becoming markedly clearer (less turbid).

Crackle Test – This well-known test for water contamination can be performed with a hot plate or soldering iron.

Bottle and Magnet Test – Ferromagnetic wear debris can be separated and concentrated for quick inspection by placing a strong rare-earth magnet against the outside surface of an oil sample bottle and then agitating. For high-viscosity oils, dilute the oil first with kerosene or another solvent to lower the viscosity.

Laser Pointer Test – Shiny, reflective particles can be easily observed in many oils by passing a laser through the oil. The particles will scatter the light. It is sometimes best to allow the particles to settle to the bottom of the bottle first and then pass the laser light up from below.

“What are the recommended acid number limits based on ASTM D664/09a for the lube oil of a gas turbine? The lubricant being used is Shell Aeroshell 500.”

ASTM D664 (as well as the similar test method ASTM D974) can be a very helpful oil analysis procedure for used oil, helping to determine the acidic constituents present within a particular lubricant sample. In order to effectively assess the acid number results obtained from this procedure, you must first obtain a baseline from a new oil sample. This is critical, as the acid number for new oil can vary considerably among different types of oils.

During acid number monitoring, an increase in the value will likely indicate the rise of acidic products present. Even among various types of turbine oils, the baseline acid number will be different. For example, Shell reports that new oil testing of Aeroshell 500 is typically around 0.11 milligrams of potassium hydroxide per gram (mg KOH/g), with a new oil maximum of 1.0 mg KOH/g. While this is a good starting point, the new oil should be tested from the same batch from which the used oil is to be sampled because the actual acid number baseline will vary.

Typical warning limits for the acid number of an in-service turbine oil are an increase of approximately 0.1 to 0.2 mg KOH/g above the baseline for gas turbines with more than 3,000 hours of oil life. Such an increase may be a significant indicator of above-normal degradation. An acid number increase of 0.3 to 0.4 mg KOH/g above the initial value will likely indicate that the oil is at or approaching the end of its service life. This recommended warning limit falls in line with that of ASTM D4378 as well.

Even though results may vary from sample to sample, any small increase of the acid number, such as a rise from 0.15 to 0.25 mg KOH/g, should cause concern and prompt an investigation as well as more frequent oil sampling. If the warning limit of a 0.3 mg KOH/g increase or greater occurs, an inspection should be conducted to check for signs of increased sediment on filters and centrifuges. If the oil must remain within the system, there should be closer monitoring of all available indicators, including sight glasses, temperature indicators, differential pressure gauges on filters and, of course, the results of more frequent oil sampling. 
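
A minimal sketch of how these limits could be applied in a monitoring script. The function name and default thresholds are illustrative, based on the increase-above-baseline guidance described above; adjust them to your own turbine and ASTM D4378 recommendations.

```python
def acid_number_status(baseline_an, current_an,
                       caution_rise=0.1, critical_rise=0.3):
    """Classify an in-service acid number (mg KOH/g) against its new-oil baseline.

    Thresholds follow the rule-of-thumb increases discussed above:
    roughly 0.1-0.2 mg KOH/g above baseline warrants investigation, and
    0.3-0.4 mg KOH/g suggests the oil is at or near the end of its service life.
    """
    rise = current_an - baseline_an
    if rise >= critical_rise:
        return "critical: oil at or approaching end of service life"
    if rise >= caution_rise:
        return "caution: investigate and increase sampling frequency"
    return "normal"

# Example: baseline 0.11 mg KOH/g (new Aeroshell 500), latest result 0.25 mg KOH/g
print(acid_number_status(0.11, 0.25))
```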

“What are the top three characteristics you look for when considering an oil analysis laboratory (i.e., turnaround time, price, quality, capabilities, etc.)?”

Selecting an oil analysis laboratory can be daunting if you don’t know where to begin. Once you make the decision to initiate an oil analysis program at your plant or to find a quality lab other than the one offered by your oil supplier, there are several important factors to consider. The following three attributes will be key to building a successful relationship with your oil analysis lab.

Quality of Testing

Many laboratories struggle to meet their customers’ expectations because of mishaps in testing procedures. A quality lab will follow ASTM or ISO test procedures in order to maintain the utmost accuracy in analysis and interpretation. Be sure to find out whether there will be any deviations from the standardized test procedures, which should be followed for all types of testing instruments. Also, do not be afraid to ask questions.

Data Interpretation

An oil analysis report is not intended to be just a sheet of paper with raw data results. These tests can be quite complicated, so it may not be easy to determine an obvious concern, let alone an inconspicuous or unusual one. The best oil analysis reports come complete with a full analysis interpretation summary. This should not be computer-generated but tailored by a specialist. The report should also have graphs that show trend data, along with a comparison to the baseline, as well as critical and cautionary limits. Finally, the report should feature a layout that is easy to understand.

Customer Service

The services an oil analysis laboratory offers should go beyond those relating simply to the oil samples. The individual in charge of receiving the reports at the plant should be in frequent communication with those who interpret data at the lab to collaborate on possible explanations for data anomalies and to obtain expert advice on determining the best course of action. The laboratory should also offer a hotline to provide quality customer service whenever you need it.

Please note that price is not included in this list, as you should expect the cost of laboratory services to remain competitive. Also, in regard to price, it is important to keep in mind that a single machine failure that is avoided through oil analysis can justify an entire year or more of the oil analysis program.

“We use a premium hydraulic oil in our hydraulic system, but copper continues to trend up. It is now at 200 parts per million. The equipment manufacturer says the only copper in the system is the tube in the heat exchanger. Could the copper be leaching into the oil?”

To confirm your suspicions, review the potential contaminants in the system. First, let’s assume that copper is a contaminant in the oil. In general, oil contaminants originate from three sources: the machine environment, machine operation and maintenance activities.

Machine Environment

Your equipment is working within a production process environment as well as an atmospheric environment. Contaminants can come from the plant’s surrounding area in the form of conventional dust, other minerals or moisture. They may also originate in the plant’s transformation process as wood, water, chemicals, metals, etc. In your hydraulic system, the source of copper could be from a mineral or material containing copper being processed in the plant.

Machine Operation

When investigating a machine’s internal contaminants, most people think only of the wear debris created during normal or abnormal operation. However, other pollutants can come from seals, gaskets or filters. Contamination may also result from chemical reactions of the lubricant with external contaminants, producing varnish, sludge, etc., or when the machine fails due to a lubrication, thermal or mechanical problem. In this case, consider the contaminants generated in the lubrication system. Heat exchangers made of copper tend to contaminate the oil with copper particles as part of their normal operation.

Oil cross-contamination is also a possibility. This may occur during storage, handling or application activities. It involves the contamination of the lubricant with another in-service lubricant, solvent or assembly oil used in the plant. This type of contamination can be identified by analyzing an oil sample in the laboratory.

The oil distributor or manufacturer’s facility is another potential source of contamination. This can be determined by conducting analysis upon receipt of the oil. With this approach, you can confirm that copper is not a typical contaminant or ingredient in your new lubricants.

Maintenance Activities

Maintenance activities are often overlooked when searching for the source of oil contamination. They generally include repairs, part replacement, welding jobs and transferring oil into the machine. Keep in mind that these mechanical tasks should be conducted in a clean environment. The oil must also be cleaned and, if necessary, dried before it is introduced into the machine. In addition, oil flushing should be performed not only to remove contaminants from a machine or oil failure but also to eliminate contaminants introduced or generated during repair activities. If the repaired component was the copper heat exchanger, it is likely that copper debris is present in the system.

By reviewing the potential contaminants that could enter your oil along with their sources, you should be able to reach the proper conclusion. Typically, copper in hydraulic systems comes from the wear of mechanical parts or from copper components such as lines and heat exchangers.

Used oil analysis is a tool, and like most tools, it can be properly used or misused, depending on the application, user, surrounding conditions, etc. A number of articles and publications explain how to interpret the information in an oil analysis report, but most fail to address one very important issue: statistical normalcy. What is “normal” in a data set represents the typical average values and expected variation within that group. It’s a matter of how to view a series of used oil analyses and how the results can shape your view of a healthy or ailing piece of equipment as well as the viability of continued lube service.

Most people have heard of the Six Sigma approach and similar statistical concepts. These are as applicable to the world of lubricants as to any other topic. Statistical analysis can be applied at both small and large scales, typically referred to as micro-analysis and macro-analysis. Micro-analysis looks at one specific entity and lets data develop as inputs affect it. An example of this would be performing a series of used oil analysis tests on one engine with reasonably consistent usage patterns. All inputs (lubricant, fuel, filtration, sample cycle, etc.) are held constant or with minimal change so the natural development of information can be seen. This is done to establish ranges and to allow any trends to develop. Over time, this methodology can be used to decide which product or process excels over another for a specific application.

It is important to note that even with extremely consistent conditional and resource inputs, there is variation, even when the process is in control. You need a considerable amount of data from this single source to define what is average and normal. This takes time, money and patience.

Macro-analysis does not look at just one entity but at all those in a desired grouping. It predicts the behavior (results) of the mass population’s reaction to changing conditions (multiple inputs). With this method, you can look at a large group of data that represents a piece of equipment (engine, gearbox, differential, transmission, etc.) from different points of origin and determine what is “normal” across a broad base of applications. Macro-analysis comes together much more quickly because multiple sources are accepted. However, caution must be used to ensure that illogical conclusions are not drawn based upon false presumptions or by confusing correlation with causation.

Oil Miles Vehicle Miles Aluminum Chromium Iron Copper Lead
5,002 49,997 3 1 14 4 3
5,028 104,993 3 1 11 3 3
5,065 154,941 2 3 14 5 6
5,019 204,983 5 1 13 3 4
5,019 254,836 2 3 12 2 4
4,960 284,815 3 2 13 6 4
Oil Miles Vehicle Miles Statistic Aluminum Chromium Iron Copper Lead
4,996 N/A Average 3.7 1.4 14.4 4.2 4.0
52 N/A Standard Deviation 1.3 0.6 2.1 1.7 1.5
5,151 N/A Upper Limit 7.6 3.2 20.7 9.3 8.6
5,102 284,815 Max. 6.0 3.0 18.0 8.0 8.0
    PPM per 1,000 Miles 0.7 0.3 2.9 0.8 0.8

Table 1. An example of micro-analysis for a V-6 gasoline engine

Micro-analysis of Data from a Single Engine

Table 1 is a good example of micro-analysis for a V-6 gasoline engine. Oil changes were performed religiously, the inputs were consistent and the owner was dedicated to the testing parameter protocol. The vehicle saw very typical use in its life cycle and environment, including weather, driving cycles, etc.

In this example, the data created was consistent and could be used to make a sound decision for the stated operating conditions. No abnormalities were revealed. The standard deviations were all well below the means, which was as expected and desired in a controlled micro-data set.
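
The summary statistics in Table 1 can be reproduced with basic arithmetic. Below is a minimal sketch, assuming the table’s upper limit is the mean plus three standard deviations (which matches the tabulated values); the six iron readings are only the rows displayed above, so the computed numbers will differ slightly from the table’s full-data summary.

```python
from statistics import mean, stdev

# Iron (Fe) readings in ppm from the displayed rows of Table 1
fe_ppm = [14, 11, 14, 13, 12, 13]

avg = mean(fe_ppm)
sd = stdev(fe_ppm)            # sample standard deviation
upper_limit = avg + 3 * sd    # assumed convention: mean + 3 sigma

print(f"Average: {avg:.1f} ppm")
print(f"Std. dev.: {sd:.1f} ppm")
print(f"Upper limit: {upper_limit:.1f} ppm")
```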

The vehicle went from a steady diet of a synthetic oil with a premium filter to a quality conventional oil with an off-the-shelf filter. The data shows that the average wear metals shifted less than a point after this change. All shifts were well within one standard deviation for each distinct metal.

What can be surmised from these results is that there was no tangible benefit to using the high-end products for this maintenance plan and operational pattern. Conversely, the typical quality baseline products presented no additional risk of accelerated wear. It cannot be concluded that this result would be true in all potential circumstances, only that it is true when applied to a 5,000-mile oil change interval with the given operating conditions. Significantly longer oil change intervals might well have shown a statistical difference between the two lube/filter choices, but that was not part of the test protocol.

 

Macro-analysis of Data from Numerous Engines

The following examples of macro-analysis illustrate how mass-market data can be used. The first set of data is from a V-8 gasoline engine.

In Table 2, note the two columns for lead (Pb). One is the raw data, while the other is the same data stream with three data points removed because they were affecting the “normalcy” of the data. Most of the lead counts in all the other samples were well below 35 parts per million (ppm), but three samples had lead counts of 68 ppm, 204 ppm and 602 ppm. When the individual results were reviewed, there was no reasonable explanation as to why the lead was so high in these three reports. In Table 3, you can see how greatly those three data points were skewing the results.

Notice how the average lead count dropped more than 57 percent, and the standard deviation decreased by nearly a factor of 10. Only three samples out of 548 were responsible for skewing the data so severely. This is where math and common sense come together to form a reasonable conclusion that some intervention in the data is warranted. By removing only 0.5 percent of the lead data population, the range shifted significantly. This indicates that those three samples were not “normal,” while the remaining 99.5 percent were.
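
A minimal sketch of the intervention described above, assuming a simple cutoff is used to flag the spikes. The readings and the cutoff are illustrative placeholders, not the actual 548-sample data set, and in practice each flagged sample should be reviewed before it is discarded.

```python
from statistics import mean, stdev

# Illustrative lead (Pb) readings in ppm, including three abnormal spikes
pb_ppm = [1, 0, 2, 3, 1, 0, 2, 1, 4, 2, 68, 204, 602]

cutoff = 35  # illustrative threshold for "not normal" readings
pb_revised = [v for v in pb_ppm if v <= cutoff]

print(f"Full data set:    avg={mean(pb_ppm):.1f}, std dev={stdev(pb_ppm):.1f}")
print(f"Revised data set: avg={mean(pb_revised):.1f}, std dev={stdev(pb_revised):.1f}")
```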

In macro-data, when the standard deviation is some multiple larger than the mean, there is cause to believe abnormalities are embedded in the data stream. When the deviation is smaller, it indicates the mass-market population is reflecting the variability of inputs as desired and not being affected by spoilers. Unfortunately, there is no hard and fast rule. Training, experience and knowledge of the subject matter will help define and delineate when and where to intervene.

4.6L Engine Oil Miles Vehicle Miles   Al Cr Fe Cu Pb Pb’
5 years and 548 samples 5,516 94,078 Average 3.3 0.9 14.6 4.8 2.8 1.2
5,159 62,211 Std. Dev. 2.4 0.6 9.7 4.7 27.4 2.8
20,992 280,710 Upper Limit 10.4 2.7 43.6 19.0 85.0 9.7
85,372 487,625 Max. 42.0 4.0 88.0 46.0 602.0 34.0
    Per 1,000 Miles 0.6 0.2 2.6 0.9 0.5 0.2
2007: 38 samples 4,492 79,906 Average 2.7 0.6 10.2 4.9 0.4 0.4
2,602 62,244 Std. Dev. 1.0 0.6 5.9 5.6 0.7 0.7
12,297 266,637 Upper Limit 5.6 2.5 27.9 21.7 2.4 2.4
12,926 300,362 Max. 5.0 2.0 27.0 27.0 3.0 3.0
    Per 1,000 Miles 0.6 0.1 2.3 1.1 0.1 0.1
2008: 100 samples 4,687 89,521 Average 2.9 0.8 14.0 4.3 9.5 1.5
2,980 62,861 Std. Dev. 1.1 0.6 10.3 4.9 63.3 4.3
13,626 278,103 Upper Limit 6.1 2.5 44.8 19.0 199.5 14.3
20,000 452,602 Max. 6.0 4.0 68.0 40.0 602.0 28.0
    Per 1,000 Miles 0.6 0.2 3.0 0.9 2.0 0.3
2009: 94 samples 4,931 87,685 Average 2.8 0.7 12.7 4.1 1.3 1.3
3,893 64,726 Std. Dev. 1.3 0.6 8.5 3.6 2.4 2.4
16,610 281,861 Upper Limit 6.7 2.4 38.2 14.8 8.6 8.6
22,541 487,625 Max. 9.0 2.0 65.0 21.0 17.0 17.0
    Per 1,000 Miles 0.6 0.1 2.6 0.8 0.3 0.3
2010: 123 samples 5,320 96,641 Average 3.4 0.9 14.6 5.4 1.1 1.1
3,078 61,329 Std. Dev. 3.8 0.7 8.8 6.0 1.6 1.6
14,555 280,628 Upper Limit 14.7 3.0 41.0 23.3 5.8 5.8
18,186 280,817 Max. 42.0 4.0 49.0 46.0 9.0 9.0
    Per 1,000 Miles 0.6 0.2 2.7 1.0 0.2 0.2
2011: 125 samples 5,720 96,805 Average 3.9 0.9 15.9 5.0 1.5 1.5
3,409 57,271 Std. Dev. 2.4 0.6 10.4 3.7 3.4 3.4
15,948 268,620 Upper Limit 11.2 2.6 47.1 16.1 11.9 11.9
16,400 359,000 Max. 23.0 3.0 88.0 31.0 34.0 34.0
    Per 1,000 Miles 0.7 0.2 2.8 0.9 0.3 0.3
2012: 68 samples 8,157 109,594 Average 3.7 1.0 18.1 5.1 1.6 0.6
11,520 66,474 Std. Dev. 1.8 0.7 10.6 4.5 8.3 1.4
42,718 309,017 Upper Limit 9.1 3.1 50.0 18.7 26.5 4.7
85,372 351,645 Max. 12.0 4.0 57.0 31.0 68.0 9.0
    Per 1,000 Miles 0.5 0.1 2.2 0.6 0.2 0.1
       
Removed three samples: 68.0, 204.0 and 602.0 ppm lead

Table 2. A macro-analysis example from a V-8 gasoline engine

  Average Lead Standard Deviation
Full Data Set 2.8 27.4
Revised Data Set 1.2 2.8

Table 3. An example of how a few data points can skew the results

In examining the results through the years, there clearly were no significant changes over time. For example, the average iron wear rate was reasonably consistent and varied by less than 1 part per million over five years of data. However, if you look at the iron wear in detail, a telling storyline develops. When the oil was run longer, the iron went up, and very predictably so. In 2007, the average oil sample was taken at roughly 4,500 miles, and the iron average was 10.2 ppm. Five years later, the average oil sample was taken at roughly 8,100 miles, and the iron average was 18.1 ppm. A roughly 80-percent increase in mileage was mirrored by a roughly 80-percent increase in iron. That is a very predictable response curve; the wear rate is consistent.

When oil is changed frequently, a higher iron wear metal count will be seen in the oil analysis results. There are two reasonable explanations for this phenomenon – residual oil and tribo-chemical interaction. Studies have shown that elevated wear levels after an oil change can be directly linked to chemical reactions of fresh additive packages. In addition, when you change oil, no matter how much you drain into the catch basin, there is always a moderate amount left in the engine. It is estimated that up to 20 percent of the old oil remains, depending on the piece of equipment. So when you begin your new oil change interval, you are not starting at zero ppm.
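
The “PPM per 1,000 miles” rows in the tables normalize a wear metal reading by the miles on the oil so that intervals of different lengths can be compared directly. A minimal sketch using the 2007 and 2012 iron averages quoted above (values are approximate):

```python
def wear_rate_per_1000_miles(metal_ppm, oil_miles):
    """Normalize a wear metal reading (ppm) by the miles accumulated on the oil."""
    return metal_ppm / (oil_miles / 1000.0)

# 2007: ~10.2 ppm iron on ~4,500-mile oil; 2012: ~18.1 ppm iron on ~8,100-mile oil
print(round(wear_rate_per_1000_miles(10.2, 4500), 1))   # ~2.3 ppm per 1,000 miles
print(round(wear_rate_per_1000_miles(18.1, 8100), 1))   # ~2.2 ppm per 1,000 miles
```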

Oil Miles Vehicle Miles   Al Cr Fe Cu Cu Prime Pb
7,261.2 100,398.8 Average 2.7 0.3 16.3 16.0 3.4 2.1
4,006.1 76,147.9 Std. Dev. 1.2 0.5 10.5 53.0 4.3 2.5
19,279.6 328,842.6 Upper Limit 6.4 1.8 47.9 175.1 16.2 9.6
28,417 843,817 Max. 8 1 75 484 34 29
    PPM Per 1,000 Miles 0.4 0.0 2.2 2.2 0.5 0.3

Table 4. A macro-analysis example from a V-8 diesel engine

Miles 3,500 7,500 11,500
Iron PPM Per 1,000 Miles 3.0 2.3 2.0

Table 5. An example of using three sub-groups to determine how the oil’s life cycle affected wear rates

While the wear rate is not greatly escalated at the front end of the oil change interval, it certainly is not lessened by frequent oil changes either. Changing your oil early does not reduce wear rates, presuming you did not allow the sump load to become compromised. When you have reasonably healthy oil, the wear rate slope is generally flat. Only after the oil becomes compromised in some manner would you see a statistical shift in wear rates. Thus, higher wear at the front of an oil change interval is plausible, but the claim of lesser wear with fresh oil is most certainly false. Those who change oil frequently at 3,000 miles are not helping their engine, and those who leave it in for longer periods are not hurting the engine.

  Al Cr Fe Cu Pb
Truck A (synthetic oil and bypass filtration) 2 1 15 4 1
Truck B (conventional oil and filter) 2 0 14 3 5
Standard Deviation 1.2 0.5 10.5 4.3 2.5
Upper Limit 6.4 1.8 47.9 16.2 9.6

Table 6. Oil analysis results for two diesel-engine trucks that were driven in similar circumstances but with different engine oils and filters

The oil analysis results from this example showed that engine wear was generally unaffected by operational conditions and oil change intervals. It was also concluded that the filtration selection, oil brand and grade, as well as various service factors did not have much of an influence on the results. For this engine, it didn’t make much difference what oil was used or how it was driven.

The next set of data in Table 4 is from a V-8 diesel engine. These oil analysis samples represent fairly high-mileage vehicles, with 179 of the 527 samples from vehicles with more than 100,000 miles and many others from vehicles with more than 250,000 miles.

Once again, there is a need to manipulate the data to remove abnormalities. Forty-one samples had ultra-high copper (Cu) counts, with many readings more than 100 ppm and some more than 300 ppm. Therefore, a separate “copper prime” column was created to root out the high flyers. Although some might decry the removal of data, you can clearly see how these spikes can adversely affect what is deemed “normal.” While 41 samples may seem like a large amount of data to remove, they represent only 7.7 percent of the total population, and yet their removal resulted in nearly a 79-percent drop in the “average” copper magnitude (from 16 to 3.4 ppm).

To determine how the oil’s life cycle affected wear rates, three sub-groups were examined: 3,500 miles, 7,500 miles and 11,500 miles. Again, higher iron wear rates were revealed toward the front of the oil change interval (see Table 5).

In no way does this mean that an engine is being harmed, but it directly contradicts the mantra that more is better (“more” being more frequent oil changes and “better” being less wear). At some point the iron wear rate will begin an ascent and probably become parabolic, but that is farther down the road than most people think. What is clear is that you can change your oil early, but it will not reduce your wear rate. You can also put off your oil change for a long time (at least to 12,000 miles), and it generally will not affect your wear rate.

Defining What is Normal

Table 6 illustrates how macro-analysis can be used to determine what is normal in separate cases. Two diesel-engine trucks were driven in very similar circumstances for the same length of time. Both trucks pulled heavy recreational vehicles into the mountains for roughly 6,500 miles and experienced comparable heat and cold patterns. However, there was a significant difference: one vehicle was run on premium synthetic 15W-40 engine oil and utilized bypass filtration, while the other truck used conventional 10W-30 engine oil with a normal filter. Table 6 lists the oil analysis results in regard to wear for both trucks.

Did either truck perform better than the other? Without true micro-analysis, you could not make such a determination. Iron is the greatest indicator of cumulative wear, and these samples were right at average levels. At face value, one might claim the synthetic oil did better because the lead value was lower in truck A and higher in truck B, but they are both well within the typical variance. Ironically, the chromium, iron and copper levels were higher in the truck using synthetic oil and bypass filtration, but again these amounts were well within the normal variation.

It can be expected that wear metal counts will bounce up and down from one sample to the next. It is also normal for metals to vary in mass populations and in individual units. However, when you can see a single sample well within mass population “normalcy,” you can deduce that it is performing no better or worse than any other unit using any other fluid/filter combination.

The slight variation that occurred was the normal variation expected for any engine in this family. Two vastly different inputs (lubes and filters) did not result in any significant difference under nearly identical operating conditions over the same exposure duration. So in these two examples, with very similar operational circumstances and conditional limitations, there was no tangible benefit whatsoever to using the high-end products.
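
A minimal sketch of the comparison made for the two trucks, assuming “normal” simply means a reading at or below the population upper limit taken from Table 6; the sample values are the Truck A and Truck B rows of that table.

```python
# Population upper limits from Table 6 (ppm)
upper_limit = {"Al": 6.4, "Cr": 1.8, "Fe": 47.9, "Cu": 16.2, "Pb": 9.6}

truck_a = {"Al": 2, "Cr": 1, "Fe": 15, "Cu": 4, "Pb": 1}  # synthetic oil, bypass filtration
truck_b = {"Al": 2, "Cr": 0, "Fe": 14, "Cu": 3, "Pb": 5}  # conventional oil and filter

def abnormal_metals(sample, limits):
    """Return the metals, if any, that exceed the population upper limit."""
    return [metal for metal, ppm in sample.items() if ppm > limits[metal]]

for name, sample in (("Truck A", truck_a), ("Truck B", truck_b)):
    flagged = abnormal_metals(sample, upper_limit)
    print(name, "abnormal metals:", flagged if flagged else "none")
```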

Contrasting Micro-analysis and Macro-analysis

Unlike micro-analysis, macro-analysis does not allow for any conclusion to be drawn as to what product(s) might be better or worse than any other in the grouping. When a sample is within one or two standard deviations of average, thereby defining itself as normal, you can only conclude that the events and products that led to that unique data stream were also normal. Any variance is not due to one particular product or condition but the natural variation of macro-inputs. Therefore, you cannot say that brand X was better than brand Y or brand Z because typical variation is in play. Only with micro-analysis, using long, well-detailed controlled studies, can you make specific determinations as to what might be better or best for an application.

With macro-analysis, if two separate samples are both within the standard deviation, the separate conditions and products did not manifest into uniquely different results. When viewed within an engine family, if engine A is compared and contrasted to engine B, and the two engines used different oils but resulted in similar wear metal counts and rates, you can conclude that neither oil was better than the other. When the results are within one standard deviation, there is no evidence that either product had an advantage over the other. Essentially, under these conditions, you cannot say that either choice is better, but you can say that neither is better.

Keep in mind that standard deviation data can be large or small, depending on your definition of large and small. For a frame of reference, when the standard deviation is more than 50 percent of the average magnitude, many consider this to be large. However, this does not preclude it from being “normal,” as defined by happening with great regularity and having no adverse successive effects.

Conclusion

In conclusion, used oil analysis is a great tool, but you must understand how to properly manipulate the data and interpret the results. You must know not only the averages but also if there are any abnormalities embedded in those averages and how large the standard deviation is. Unfortunately, you’ll never know how many abnormalities are present, nor if they have been pre-screened for you, because most oil analysis services do not perform this extra filtering. You can take solace in the fact that if your results are near or less than “universal average,” you’re probably in good shape. You are, in essence, “normal.”

While most engine oils are made to acceptable standards, their general and specific qualities can vary widely. Poor-quality engine oils are often put on the market due to ignorance or greed. Unfortunately, for the uninformed automobile owner, a high-quality engine oil and one of poor quality will look and feel the same.

Engine and Bench Tests

The engine has always been the ultimate platform for identifying the required quality of its oil. Even as engine design has changed to meet performance, fuel efficiency and environmental standards, the engine continues to be the ultimate arbiter of oil quality. However, using the engine to measure oil quality in dynamometer tests can be an expensive proposition. Even so, to help control warranty costs, the development and use of engine tests is unavoidable for engine manufacturers when determining the oil quality needed for a particular design or component.

Although necessary, generating repeatable dynamometer tests for an engine can be challenging. As engine design has progressively increased power from smaller engines, the difficulty of establishing repeatable dynamometer tests has grown even more rapidly. Fortunately, once the quality level has been determined on the dynamometer or in the field, there is a much less expensive approach that can be applied to more precisely appraise the oil quality. This involves using laboratory bench tests designed to correlate closely with engine dynamometer tests or field experience. These bench tests have the capability of providing a relatively inexpensive measure of oil quality. However, the value and significance of this type of test is dependent on a number of factors, including identification of the engine’s specific needs, clear and consistent information from the engine either in dynamometer tests or field experience, and an understanding of the relationship between the engine’s needs and the oil’s physical and/or chemical properties.

 Engine Oil Properties

To serve the engine, oil must possess certain physical and chemical properties. During the oil’s service, the engine generates a number of operating stresses that adversely affect the long-term ability of the oil to function at a consistently high level. Service conditions may also vary widely depending on the environment and the way the vehicle is used. Consequently, choosing an engine oil to meet particular service needs and conditions requires knowledge of several important oil properties, including viscosity.

Viscosity

Viscosity may be defined as a fluid’s resistance to flow. Because a fluid’s molecules are somewhat attracted to one another, energy is required to pull them apart and create flow. In general, larger molecules have more attraction between them and a higher viscosity. The energy required to overcome this molecule-to-molecule attraction and produce fluid flow can be considered a form of friction. Therefore, viscosity can be defined as a form of molecular friction. Of all the engine oil’s physical and chemical qualities, its viscosity and viscometric behavior during use are often considered the most important.

Viscosity and Wear Prevention

This same molecular friction prevents the oil from escaping too quickly when two engine surfaces in relative motion are brought closely together under pressure. This inability of the intervening oil to escape quickly and its level of incompressibility hold the two surfaces apart and prevent wear, a process that is termed hydrodynamic lubrication. The higher the viscosity, the greater the attraction of the oil molecules and the greater the wear protection.

Viscosity Classification

A lubricant’s viscosity has always been associated with wear protection. Early in its history, SAE recognized viscosity as important to engine function and instituted the J300 classification system, which establishes viscosity levels for engines by a series of grades. These grades are defined by viscosity levels in one or two temperature zones. Today, the grades are set for engine operating temperatures and for winter temperatures at which the oil affects starting and pumping.

Viscosity at Operating Conditions

In the early years of automotive engines, oils were simply formulated and obeyed Newton’s equation for viscosity – the more force used to make the fluid flow (shear stress), the faster it would flow (shear rate). Essentially, the ratio of shear stress to shear rate – the viscosity – remained constant at all shear rates. The engine oils of that time were all essentially single grade and carried no SAE “W” classification.

This viscometric relationship changed in the 1940s when it was discovered that adding small amounts of high-molecular-weight polymers appeared to give the oil the desired flow characteristics for both low-temperature starting and high-temperature engine operation. Accordingly, these polymer-containing oils were listed by the SAE viscosity classification system as multigrade engine oils, as they met the requirements of both viscosity temperature zones.

Since that time, multigrade oils (e.g., SAE 10W-40, 5W-30, 0W-20, etc.) have become very popular. However, they were no longer Newtonian in flow characteristics, as the viscosity was found to decrease with increasing shear rate. This was considered important in lubricating engines that operated at high shear rates (as measured in millions of reciprocal seconds), in contrast to the several hundred reciprocal seconds of the low-shear viscometers then being used to characterize engine oils.

 High Shear Rate Viscometry

Consequently, the need arose to develop a high shear rate viscometer to reflect the viscosity in engines under operating temperatures. In the early 1980s, an instrument and a technique were developed that could reach several million reciprocal seconds at 150 degrees C as well as exert high shear rates at other temperatures on both fresh and used engine oils. The instrument was called the tapered bearing simulator viscometer. The technique was accepted by ASTM as test method D4683 for use at 150 degrees C (and more recently as D6616 for use at 100 degrees C). This critical bench test of engine oil quality became known as high temperature, high shear rate (HTHS) viscosity. Minimum limits were then imposed for various grades in the SAE viscosity classification system.

Interestingly, it was later shown that this instrument was unique and basically absolute in providing measures of both shearing torque or shear stress and shear rate while operating. It is the only known viscometer capable of doing this.

Viscosity and Oil Gelation at Low Temperatures

Multigrade engine oils were originally introduced to reduce oil viscosity at low temperatures to aid in engine startup. This important benefit was immediately apparent, and multigrade oils have since become the most popular form of engine lubricant around the world.

With easier engine startability at low temperatures, another problem became evident – oil pumpability. This was a considerably more serious issue, as lack of oil pumpability could destroy the engine. In cold-room dynamometer tests, it was determined that there were two forms of the pumpability problem. The first was simply related to high viscosity and called flow-limited behavior. The second was less obvious and involved the gelling of the oil under a long, deep cooling cycle. This was labeled “air-binding,” since the oil pump became air-bound as the result of a column of oil being pulled from the sump and the oil not filling this void, as shown in Figure 1.

This knowledge and bench test, which initially seemed to predict both forms of failure, were not enough. In the winter of 1979-80 in Sioux Falls, South Dakota, a cooling cycle showed that air-binding could occur under relatively mild cooling conditions. Over a 24-hour period, a number of engines containing oil were ruined. The cooling cycle had produced a condition in which the oil became air-bound. The costly incident revealed the need for a more sensitive bench test that would accurately predict the tendency of air-binding pumpability failures.

Gelation Index

The air-binding engine oil that caused the Sioux Falls failures provided a solid case study. A new bench test instrument and technique were developed to indicate any tendency of the test oil to gelate. The technique, which involved continuous low-speed operation of a cylindrical rotor in a loosely surrounding stator, was immediately incorporated into engine oil specifications and later became ASTM D5133. This not only showed the oil’s tendency to become flow-limited but also specified the degree of gelation that might occur over the measured temperature range (typically minus 5 to minus 40 degrees C). The parameter was called the gelation index. Today, engine oil specifications for multigrade oils require a maximum gelation index of 12.

Viscosity and Energy Absorption

As beneficial as viscosity is to the engine in preventing wear through hydrodynamic lubrication, it also has some negative aspects that can affect the engine’s operating efficiency. The oil’s molecular friction, which separates two surfaces in relative motion, requires energy to overcome it. This is a significant amount of energy from the engine in exchange for the provided wear protection. Therefore, careful formulation of the oil viscosity is critical to vehicle owners and to governments mandating fuel economy limits. Lowering oil viscosity can be an important step in reducing viscous friction to improve fuel efficiency. Interestingly, over the last several years, there has been an increase in the number of automobiles operating with engine oils that have lower viscosity levels, thus markedly improving their engine efficiencies.

A decade ago, the lowest SAE viscosity grades were SAE 0W-20 and 5W-20 oils, with SAE 20 carrying the minimum high shear rate viscosity of 2.6 centipoise (cP) to simulate engine operation at 150 degrees C. Figure 2 shows data from engine oils sold in North and South America as well as for SAE 5W-30 engine oils.

Japanese automakers have recently called for even lower viscosity grades. As a consequence, the SAE has introduced three new operating grades identified as SAE 16 (2.3 cP minimum at 150 degrees C), SAE 12 (2.0 cP minimum at 150 degrees C) and SAE 8 (1.7 cP minimum at 150 degrees C). These grade requirements are also shown in Figure 2 for comparison. None of these lower-grade oils has yet reached the market for analysis. Since viscosity is directly related to the amount of energy expended by the engine for wear protection through hydrodynamic lubrication, such a decrease in viscosity would be expected to have important benefits in fuel efficiency but only in engines designed for their use.

 Viscosity-dependent Fuel Efficiency Index

Given the influence that oil viscosity has on the engine, a technique was developed to calculate the effects of engine oils on fuel efficiency. To be meaningful, the viscosity values had to be obtained at the high shear rates associated with operation in specific sections of the engine.

Earlier dynamometer work had identified the percentage of friction and operating temperature of the five main lubricating sites in a reciprocating gas-fueled engine responsible for nearly all efficiency loss. This information was used to develop the viscous fuel efficiency index (V-FEI) parameter. With this value, which ranges from 0 to 100, the higher the V-FEI of a given engine oil, the less energy is lost to viscosity, and consequently, the more fuel efficient the engine is. Although different engine designs may have different levels of friction in the essential lubricating areas, use of this friction data provides a comparative value for engine oils.

Figure 3 shows the average value of SAE 0W-20 and 5W-30 engine oils from the North and South American markets from 2008 to 2014. For comparison, the average V-FEI for SAE 0W-20 and 5W-30 in an earlier study was 46 and 47 respectively.

As expected, it was determined that the yearly averaged multigrade SAE 0W-20 oils contributed more fuel efficiency to the engine than did the averaged multigrade SAE 5W-30 oils because of the viscosity differences shown in Figure 2. With the exception of 2012, the increase in V-FEI is equivalent to nearly 7 to 8 percent in viscosity-dependent fuel efficiency. The decrease shown in the average fuel efficiency of SAE 0W-20 engine oils collected in 2012 may indicate the development of formulations meeting automakers’ concerns that the benefits of hydrodynamic lubrication will not be lost in efforts to improve fuel efficiency.

Engine Oil Volatility

Another aspect to consider when reducing the viscosity in engine oil formulations is that such a reduction is most frequently obtained by using base oils with higher volatility. Volatized oil reduces the amount of lubricant serving the engine and may carry exhaust catalyst-contaminating components, negatively affecting the catalyst’s smog-reducing ability. The oil remaining after the loss of the more volatile components will also be more viscous and energy-absorbing.

Figure 4 shows the response of two of the most volatile multigrade engine oil classifications. Also shown is the specified maximum volatility set by the International Lubricant Standardization and Approval Committee (ILSAC). It is evident that, in the last few years, the SAE 0W-20 and 5W-30 classification categories have met the ILSAC volatility specification by a comfortable margin. These results suggest that volatility control may be less demanding with the more recently classified multigrade oils identified as SAE 0W-16, 0W-12 and 0W-8.

Phosphorus Emissions and Volatility

Soluble phosphorus compounds such as zinc dialkyldithiophosphate (ZDDP) have been used in formulating engine oils for many years. These anti-wear and antioxidation compounds have provided considerable support to the design of modern engines.

In the mid-1900s, the reciprocating engine was identified as a major contributor of air pollution. Unburned or partially burned hydrocarbons from the engine exhaust were modified by sunlight into noxious gaseous hydrocarbons, which produced smog in some large cities. As a consequence, exhaust catalytic converters were developed in the 1970s to treat the exhaust gas and convert it into carbon dioxide and water. Unfortunately, in the years following the catalytic converter’s development, it was discovered that certain elements in gasoline or engine oil, including phosphorus and sulfur, would deactivate the catalyst by coating it. This ultimately led to restrictions on the quantity of these chemicals in engine oil and fuel.

Phosphorus Emission Index

The Selby-Noack volatility test was developed in the early 1990s as a better and safer approach for determining engine oil volatility. It collects the volatile components driven off during the test for further analysis, which is helpful in detecting phosphorus and sulfur. In the first analyses of volatiles collected from the bench test, it was apparent that the phosphorus additives in the engine oils were releasing volatile phosphorus through additive decomposition. On the basis of these findings, a parameter related to the amount of phosphorus released during the test was developed, called the phosphorus emission index (PEI).

Figure 5 shows the change in PEI over the last eight years. It is evident that considerable progress has been made in reducing the phosphorus decomposition and/or volatility of these two multigrade SAE classifications. The reduction of the PEI to 6 to 10 milligrams per liter of engine oil is a significant change in protecting the catalytic converter from the effects of phosphorus.

With the trend toward smaller, fuel-efficient and turbocharger-equipped engines generating higher temperatures during operation, a bench test that can reveal an oil formulation’s phosphorus emission tendencies would be useful in designing lubricants best suited to the engine and the environment.

Phosphorus Content and Volatility

How much influence the phosphorus in an engine oil has on the amount of phosphorus volatilized during engine operation is an important question affecting the choice of additives in oil formulation. Figure 6 shows the phosphorus content in a number of SAE 0W-20 and 5W-30 engine oils vs. the PEI values obtained. The data reveals that phosphorus volatility generated by the Selby-Noack test is virtually unrelated to the amount of phosphorus present in the oil as an additive. The lack of correlation between the phosphorus in the engine oil and the amount of phosphorus volatilized is evident in the low correlation coefficient (R²) values. This parameter would be near a value of one if phosphorus concentration affected its volatility. As shown in Figure 6, the values obtained from the data are much lower, with R² at 0.05 for SAE 0W-20 and 0.17 for SAE 5W-30 engine oils.
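
For readers who want to reproduce this kind of check on their own data, below is a minimal sketch of computing R² for a simple linear fit of PEI against phosphorus content. The paired values are hypothetical placeholders, not the Figure 6 data.

```python
from statistics import mean

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    xm, ym = mean(x), mean(y)
    sxx = sum((xi - xm) ** 2 for xi in x)
    sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ym - slope * xm
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - ym) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical pairs: (phosphorus content in ppm, PEI in mg/L)
phosphorus = [600, 650, 700, 750, 800, 850]
pei = [12, 5, 22, 8, 15, 10]

print(f"R^2 = {r_squared(phosphorus, pei):.2f}")   # a low value indicates weak correlation
```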

The PEI data are primarily clustered at values from 2 milligrams per liter to about 30 milligrams per liter. However, a small number of PEI values exceed 40 milligrams per liter. These engine oils are likely to be more harmful to the exhaust catalyst. However, as has been shown in Figure 5, PEI levels have been decreasing markedly over the last few years.

Without question, the quality of engine oils will play a much greater role in the smaller, more powerful turbocharged engines that are entering the automotive market. However, it is essentially impossible to establish the quality of an engine oil by appearance. This determination can only be made by using the oil or pre-testing it. Obviously, the latter is the much preferred option for automobile owners, who have a significant investment in and need for a well-functioning and durable engine.

Engine Oil Database

Thirty years ago, based on concerns expressed by engine manufacturers about the quality of some oils, the Institute of Materials (IOM) began to compile an engine oil database. Engine oils were collected directly from the market and analyzed by selected laboratories through a series of bench tests. The results were then published. The database, which is available at www.instituteofmaterials.com, now covers more than 14,000 engine oils worldwide.

Why an Oil’s Base Number May Vary

 “Our plant owns many generator sets. All of the engines are almost the same age. I noticed that some engine oils have experienced a rapid drop in the base number. For example, the base number of some engine oils reached 50 percent of their initial value just after 500 hours, while other oils are still at an acceptable level after 1,000 hours. The engines use the same oil and the same combustible. I think the decrease in the base number depends on the consumption of combustibles and lubricants (proportional to fuel or gas consumption and inversely proportional to the consumption of lubricants). Have you ever experienced such a situation? What else can explain such a reduction of the base number?”

In normal conditions, a reduction of the base number is expected due to the oil’s additives operating in the machine. These help to keep the engine cleaner and neutralize acids formed in the combustion chamber. If your lubricants, fuel, engine models and operating conditions are consistent across the fleet but different additive depletion rates are reported, you should investigate the units involved.

For example, be sure the same laboratory is used for all oil samples as well as the same test method. While lab test results generally are reliable, there is a natural variation accepted for each method. This means two or more tests may be run for the same sample with slightly different values reported. Do not disregard the possibility of an error being made when the test is performed. If you have doubts, ask the lab to confirm the results by conducting another test.

The lubricant consumption rate should also be considered. The higher an engine’s oil consumption rate, the higher the oil make-up rate, which results in an increase in the oil’s base number value.

Also, keep in mind that oils are manufactured within certain specifications, which have minimum and maximum values. Check to see if the oil’s base number is different from one batch of lubricant to another.

If fuel dilution is occurring in the engine due to fuel leaks or incomplete fuel burning, it can also reduce the base number concentration of the in-service oil.

In addition, you should evaluate the oil change interval. The more extended the oil change interval, the lower the base number tends to be when the oil is changed.

Finally, if the unusual base number patterns are seen in just a few units, analyze what is different about them. Likely, only one or two factors are involved. However, if the pattern is occurring randomly across the fleet, it may be the sum of many factors.
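
The make-up oil effect mentioned above can be approximated with a simple mixing calculation. This is a minimal sketch assuming ideal blending of the remaining in-service oil with fresh top-up oil; the sump volume and base numbers are illustrative.

```python
def blended_base_number(bn_in_service, bn_new, sump_volume, makeup_volume):
    """Approximate base number (mg KOH/g) after topping up, assuming ideal mixing."""
    remaining = sump_volume - makeup_volume
    return (bn_in_service * remaining + bn_new * makeup_volume) / sump_volume

# Illustrative: 200-L sump at BN 4.0, topped up with 20 L of new oil at BN 10.0
print(round(blended_base_number(4.0, 10.0, 200, 20), 2))   # ~4.6 mg KOH/g
```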

Over the past 15 years, it has been stated numerous times that new oil is not clean oil, and yet while visiting 12 different plants during the last six months, I discovered that not a single one of them was sampling lubricants upon receipt. Since many of these organizations filter their oils before placing them into service, they probably think this additional step doesn’t matter. However, though filtering oil will remove dirt and particles, there is so much more that could be wrong with the oils you are putting into your machines. For the sake of your plant’s reliability, please read this article and heed the recommendations that it offers.

What Is Your Acceptable Quality Limit?

In statistical process control, the term “acceptable quality limit” (AQL) refers to the worst tolerable process average that is still considered acceptable. According to Wikipedia, this is “a test and/or inspection standard that prescribes the range of the number of defective components that is considered acceptable when random sampling those components during an inspection.” These defects generally fall into three categories: critical, major and minor. The manufacturer usually determines which defects fall into which category.

What are your product quality controls? Is your AQL 95, 97 or 99.5 percent? Consider that the world’s largest oil producer has reported production rates of 241,668,000 gallons of oil per day. Even with a 99.9-percent AQL, this means that 241,668 gallons of oil produced daily would have some sort of defect. Over the course of a year, this would total more than 88 million gallons of defective oil. While I’m not alleging that oil companies are producing millions of gallons of oil that is out of specification, it is a possibility. Of course, you can’t know for certain if you don’t sample and test your oil upon receipt.
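
The figures above follow from simple arithmetic, reproduced in the short sketch below using the quoted production rate and a 99.9-percent AQL; the yearly total is approximate.

```python
daily_production_gal = 241_668_000   # reported daily production, gallons
aql = 0.999                          # 99.9 percent acceptable

defective_per_day = daily_production_gal * (1 - aql)
defective_per_year = defective_per_day * 365

print(f"{defective_per_day:,.0f} gallons/day")     # ~241,668 gallons
print(f"{defective_per_year:,.0f} gallons/year")   # ~88 million gallons
```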

Sampling and Testing New Oils

Hopefully, you now understand why you should be sampling and testing new oils, but how can you do this? Most of the issues that occur are related to the oil’s viscosity, as opposed to the base oil type or additive mixtures, but this does not discount the benefit of a full quality test slate. Let’s begin by discussing the simplest tests that can be performed and then move to the more complex.

Viscosity

A viscosity comparison is one of the easiest tests to perform. Many viscometers can also provide quick results, which is important since the delivery person will not be willing to wait around for a long time while you sample and test the oil. Although it would be obvious if you received an ISO 220 gear oil instead of an ISO 32 hydraulic fluid, can your eyes tell the difference between an ISO 32 and an ISO 46 or 68? Granted, moving up a grade may not have much of an impact on the equipment’s operation, but going down a grade or two most certainly will. While you may not be able to distinguish between the two different viscosities, I can assure you that your equipment will.

As a rule of thumb, a lubricant’s film thickness increases by about 62 percent when the viscosity is doubled. The reverse also applies: if you cut the viscosity in half, you reduce the film thickness by a comparable amount.
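
The 62-percent figure is consistent with the commonly cited elastohydrodynamic relationship in which film thickness scales with roughly the 0.7 power of viscosity. A minimal sketch of that scaling, assuming the 0.7 exponent (other contact regimes use different exponents):

```python
def film_thickness_ratio(viscosity_ratio, exponent=0.7):
    """Relative change in film thickness when viscosity changes by a given ratio."""
    return viscosity_ratio ** exponent

print(f"Doubling viscosity: x{film_thickness_ratio(2.0):.2f}")   # ~1.62, i.e. +62 percent
print(f"Halving viscosity:  x{film_thickness_ratio(0.5):.2f}")   # ~0.62, i.e. -38 percent
```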

Particle Counting

A particle count is another easy test to conduct prior to the acceptance of a lubricant. Again, there are many simple-to-use, quick and fairly accurate particle counters available on the market. This test can give you a good idea of how much filtration will be needed prior to adding the new oils to your equipment.

Offsite Testing

The viscosity comparison and particle count tests can and should be performed onsite prior to accepting a lubricant delivery, as they can quickly reveal if something is wrong. However, neither of these tests provides a true indication that the product in the container matches what is on the label. To accurately determine this, a series of tests must be conducted. In most cases, this level of testing will require the sample be sent offsite to a laboratory.

 Trust But Verify

Some lubricant suppliers offer oil analysis as part of their services. However, I warn you to not allow the fox in the henhouse. Although many who provide testing are honorable and do a good job, unfortunately some do not. If you send a sample from a new drum of oil to your supplier, it is in their best interest to “confirm” that drum is good. Likewise, if you send a sample of in-service oil, it is in the supplier’s best interest to “determine” that it is bad and in need of changing out.

While serving in the U.S. Navy, I learned a tenet that Ronald Reagan was famous for saying: “Trust but verify.” This expression applies here as well. I would suggest having a third-party lab on standby to help keep your supplier honest. Again, most are decent and honest, but how will you know unless you follow Reagan’s advice? You don’t have to send every sample to an independent lab for verification, just enough to feel confident that the supplier is providing trustworthy analysis. I would also recommend visiting the supplier’s warehouse and laboratory if possible.

How to Draw a Proper Oil Sample

Proper oil sampling is essential for your oil analysis program to be effective. The question then becomes how to draw a representative sample. The procedure below outlines the best practice for drawing a sample from a static container.

Preparation

Confirm that the port identification plaque corresponds to the work order. Next, remove the plug from the tank opening and clean the exposed ports.

Hardware Flushing

Insert one end of the new nylon tubing into the tank and the other end into the vacuum pump. Do not tighten the knurled nut on the sampler, so that air can vent during sampling. Loosely thread on the purge bottle. Purge 10 times the estimated dead volume by actuating the vacuum pump. Loosen the knurled nut to stop flow and remove the flush bottle.

 Sample Bottle Preparation

Open the sampling bottle. Tightly thread the sampling bottle onto the sampling pump (the nylon tubing end must puncture the bag).

 Sampling

Extract the oil sample by pulling the vacuum pump handle. Fill the bottle no more than three-fourths full. Stop the oil flow by loosening the knurled nut to break the vacuum. Extract the tubing from the tank.

 Labeling

Unthread the sampling bottle from the vacuum sampler and tightly secure the cap without opening the plastic bag. Write the required data on the label and attach it to the sampling bottle if not completed previously.

 

 Cleaning

Detach the tubing and discard it. Clean the sampling pump and place it in a plastic bag. Wipe clean and reinstall the dust cap on the sampling valve. Wipe up any fluid that may have spilled on the machine. Dispose of the purged fluid, nylon tubing and any used lint-free cloth in accordance with the plant’s environmental policy.

Incoming Oil Test Slate

The following is a recommended slate of tests for incoming oil (a simple acceptance-check sketch follows these lists):

  • Viscosity at 40 degrees C (ASTM D445)
  • Viscosity at 100 degrees C (ASTM D445)
  • ISO particle count (ASTM D7647)
  • Acid number (ASTM D664, D974, D3339) and base number (ASTM D2896)
  • Karl Fischer moisture (ASTM D1744 or D6304)
  • Elemental spectroscopy (ASTM D5185, D6595)
  • Fourier transform infrared (FTIR) spectroscopy (ASTM E2412)

Additional Tests by Fluid Type

Compressor, Gear, R&O and Turbine Oils

  • Color (ASTM D1500)
  • Foam stability/tendency (ASTM D892)
  • Demulsibility (ASTM D1401, D2711)
  • Linear sweep voltammetry (ASTM D6810, D6971)

Hydraulic and Motor Oils

  • Varnish potential (ASTM D7843)
  • RPVOT (ASTM D2272)
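
Drawing on the test slate above, below is a minimal sketch of an incoming-oil acceptance check. The specification values (the ±10 percent ISO viscosity grade band and the water target) are illustrative placeholders; replace them with your own lubricant specifications and add checks for the other tests as needed.

```python
def check_incoming_oil(visc_40c, iso_vg, water_ppm, water_limit_ppm=300):
    """Flag basic incoming-oil problems against an illustrative specification.

    ISO viscosity grades span roughly +/-10 percent of the grade midpoint
    (e.g., ISO VG 32 runs from about 28.8 to 35.2 cSt at 40 degrees C).
    """
    issues = []
    if not (0.9 * iso_vg <= visc_40c <= 1.1 * iso_vg):
        issues.append(f"viscosity {visc_40c} cSt is outside the ISO VG {iso_vg} band")
    if water_ppm > water_limit_ppm:
        issues.append(f"water {water_ppm} ppm exceeds the {water_limit_ppm} ppm target")
    return issues or ["accept"]

# Example: a delivery labeled ISO VG 32 that measures 45 cSt at 40 C with 150 ppm water
print(check_incoming_oil(45.0, 32, 150))
```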

Hold Suppliers Accountable

It is critical to your oil analysis program that you sample and test oils upon receipt. The possibility of receiving the wrong oil or lubricants that do not meet the required specifications is very real. Consider how uncomfortable you become when one of your customers is delivered the wrong product or one that is of poor quality. What are the costs involved in getting that product back and replacing it? What are the hidden costs in the damage to your relationship with your customer? How many times can you make that mistake before it affects your reputation and business? Shouldn’t you hold your vendors and suppliers to the same standard? Remember, they are not responsible for the reliability and uptime of your machines – you are. To fulfill this responsibility, you must ensure that you receive clean, quality lubricants for your equipment.

Dirt Is Not Your Oil’s Only Problem

There’s a reason your oil container has an expiration date. Particles and dirt compromise an oil’s benefits, but so does age. Additives can separate out of the oil, the viscosity can change, and so on.

Most oil, taken straight from the can or drum, contains varying amounts of particulate matter. Certainly, any oil intended for diluting samples must be carefully filtered. Good results can be obtained by filtering through a surface-type filter with a pore size of about 0.4 microns. The damage to machinery caused by the contaminants in otherwise clean oil depends mainly on how hard the contaminant particles are, how large they are, their quantity and, of course, how critical the application is.

So do you need a synthetic lubricant? If your machines must perform at very high or very low temperatures, the answer is yes.

Advantages of Synthetic Base Oils

Petroleum-based mineral oils function very well as lubricants in probably 90 percent of industrial applications. They are cost-effective and provide a reasonable service life if used properly, but they have some limitations, depending upon the specific type of base stock used, the refining technology, the type and level of additives blended, and the operating conditions encountered. The main service difficulties with mineral oils are:

1. The presence of waxes, which can result in poor flow properties at low temperature.

2. Poor oxidation stability at continuously high temperatures, which can lead to sludge and acid buildup.

3. The significant change in viscosity as the temperature changes, which can cause the base oil to thin excessively at high temperature (see the viscosity-temperature sketch after this list).

4. A practical maximum high-temperature application limit of about 125 degrees C (250 degrees F), above which the base oil oxidizes very rapidly. Ideally, mineral oil-based lubricants should be kept within an operating range of 40 to 65 degrees C (100 to 150 degrees F).
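
To make the viscosity-temperature point in item 3 concrete, the sketch below uses the Walther relation that underlies the ASTM D341 viscosity-temperature charts to estimate how a mineral oil thins as it heats up. The inputs of 46 cSt at 40 degrees C and 6.8 cSt at 100 degrees C are assumed, catalog-typical values for an ISO VG 46 oil; the sketch is an illustration only, not a substitute for measured data.

    import math

    def viscosity_at(t_c: float, v40: float, v100: float) -> float:
        """Estimate kinematic viscosity (cSt) at t_c deg C from the 40 and 100 deg C
        viscosities, using the Walther relation behind ASTM D341."""
        def zz(v):                                   # double-log transform of viscosity
            return math.log10(math.log10(v + 0.7))
        t40, t100, t = 313.15, 373.15, t_c + 273.15  # temperatures in kelvin
        b = (zz(v40) - zz(v100)) / (math.log10(t100) - math.log10(t40))
        a = zz(v40) + b * math.log10(t40)
        return 10 ** (10 ** (a - b * math.log10(t))) - 0.7

    # Assumed ISO VG 46 mineral oil: ~46 cSt at 40 deg C and ~6.8 cSt at 100 deg C
    for temp in (40, 65, 100, 125):
        print(f"{temp:>3} deg C: {viscosity_at(temp, 46.0, 6.8):5.1f} cSt")

Run as written, the sketch shows the assumed oil dropping from 46 cSt at 40 degrees C to roughly 4 to 5 cSt by 125 degrees C, which is why items 3 and 4 above matter together.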

Synthetic base oils are expensive because of the processing involved in creating these pure chemical base stocks, so their use must be justified by a financial benefit that offsets the additional cost.

With regard to their chemical purity, think of the analogy of a container of balls. Mineral oil would be like having the container filled with many different balls of different shapes and sizes, such as footballs, baseballs, tennis balls, ping-pong balls, soccer balls, golf balls, etc. Mineral oils contain thousands, if not millions, of different chemical structures (molecules). A synthetic oil would be the equivalent of having the container filled with just one type of ball (tennis balls). Every structure in the container of synthetic oil is almost identical to the structure beside it.

The two main advantages of synthetic oils are their ability to outperform mineral oils at high operating temperatures (above 185 degrees F) and at low operating temperatures (below 0 degrees F). There are other potential advantages, too.

Depending on the type of synthetic, other advantages of synthetic lubricants (beyond the high- and low-temperature advantage) may include:

• Improved energy efficiency (typically less than 1 percent) due to better low-temperature properties

• Higher oil film strength with some synthetics

• Extended warranties by some equipment manufacturers

• Lower engine hydrocarbon emissions

• Extended drain intervals in some (clean) applications

• Biodegradability with some synthetics (esters)

• Natural detergency

• Higher viscosity index

• Fire resistance (phosphate esters)

Unearth the benefits of GG Friction Antidote: an investment that pays off. Your benefits at a glance:

Innovative tribological solutions are our passion. We’re proud to offer unmatched friction reduction for a better environment and a quick return on your investment. Through personal contact and consultation, we provide reliable service and support and help our clients succeed in all industries and markets.

Profitability:

Switching to a high-performance lubricant pays off. Although purchasing costs may seem higher at first, reduced maintenance and a longer service life for vehicle and machinery parts can mean less strain on your budget even in the short to medium term.

Continuous production processes and predictable maintenance intervals keep production losses to a minimum. Consistently high lubricant quality ensures maintenance-free, long-term lubrication for high plant availability, and a steady supply of fresh GG Friction Antidote-treated lubricant to the lubrication points keeps friction low and reduces energy costs.

Safety:

Longer lubrication intervals reduce the frequency of maintenance work and the need for your staff to work in danger zones. Lubrication systems can therefore considerably reduce occupational safety risks in work areas that are difficult to access.

Reliability:

GG Friction Antidote-treated lubricants ensure reliable, clean and precise lubrication around the clock. Continuous friction reduction at the application keeps the plant available, and lubrication with GG Friction Antidote-treated lubricants helps prevent major rolling-bearing failures.

Instant ROI for Optimizing Your Lubrication Regimen

  • How many kilometers do you travel monthly?
  • How many hours do you clock monthly?
  • How many litres of fuel do you consume monthly?
  • What’s the cost of fuel to you monthly?
  • How many kilometers or hours do you run per oil change?
  • How many litres of oil do you consume per oil change?
  • What’s the cost of oil to you monthly?
  • What’s the cost of oil filters per oil change to you?
  • What’s the cost of grease to you monthly?
  • What’s the cost of fuel filters per oil change to you?
  • What’s the cost of air filters per oil change to you?
  • What’s the average frequency of vehicle/machinery replacement to you?
  • What’s the cost of vehicle/machinery replacement to you?

Would you like to lower your operating costs, improve uptime and increase your company’s profits?

Let’s do the math together.
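
As a worked illustration of that math, the minimal Python sketch below totals a machine’s monthly fuel and lubrication spend from answers like those above and shows how a given percentage saving would translate into money per month. Every figure in it, including the 5 percent saving, is a placeholder assumption for illustration, not a GG Friction Antidote performance claim.

    def monthly_spend(fuel, oil, grease, oil_filter, fuel_filter, air_filter, oil_changes_per_month):
        """Total monthly spend on fuel, oil, grease and filters (filter costs are per oil change)."""
        filters = (oil_filter + fuel_filter + air_filter) * oil_changes_per_month
        return fuel + oil + grease + filters

    # Placeholder figures for one machine (assumptions, not measured data)
    baseline = monthly_spend(fuel=4000, oil=300, grease=80,
                             oil_filter=40, fuel_filter=25, air_filter=35,
                             oil_changes_per_month=1)

    assumed_saving = 0.05   # assumed 5 percent overall reduction, for illustration only
    print(f"Baseline monthly spend: {baseline:,.0f}")
    print(f"Monthly saving at {assumed_saving:.0%}: {baseline * assumed_saving:,.0f}")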

The information in this literature is intended to provide education and knowledge to readers with technical experience regarding the possible application of GG Friction Antidote. It does not constitute an assurance of vehicle/machinery optimization, nor does it release the user from the obligation to perform preliminary tests with GG Friction Antidote. We recommend contacting our technical consulting staff to discuss your specific application. We can offer services and solutions for your heavy machinery and equipment.