A Brief History of Precision Machining

Precision machining has been around for well over a century, dating back to the Industrial Revolution, which allowed manufacturers to produce extremely accurate parts for the first time. This technology enabled the Victorians to produce sophisticated farming equipment, ships and machines. One of its main beneficiaries was Isambard Kingdom Brunel, the legendary mechanical and civil engineer who built docks, bridges, ships and railways.

Whilst Brunel had to make do with unreliable calipers and heavy machinery to build his masterpieces, today’s precision machining workplace is extremely different. Technology has changed to the point where it is unrecognisable from the Victorian era: the introduction of computers, water cutters and lasers makes a workshop look more like the base of operations of a Bond villain than an industrial factory.

Lasers are used both as incredibly accurate measuring tools and for cutting and shaping metal. Laser cutters work by heating or melting the material until it takes the correct shape, whereas water cutters direct a high-pressure jet that slices through the material, leaving behind a high-quality finish without damaging the surrounding structure.

Computers, which control the lasers, have been perhaps the biggest breakthrough. Quality precision machining means following extremely accurate and specific blueprints produced with CAD (computer-aided design) and CAM (computer-aided manufacturing) programs, which are then executed by Computer Numerical Control (CNC) machines. The programs produce complex 3D models or outlines that are used to manufacture items such as tools, machines, winches or any other objects required by the customer. Because CNC machines make the process more controlled and accurate, they help to achieve better results when machining metals. These days, it is the use of these programs throughout the process, from the design stage through to the machining stage, that makes machining so precise, and the former threat of human error is now greatly reduced.
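As a rough sketch of that pipeline, the short Python program below plays the part of a CAM program: it converts a designed outline into the kind of movement instructions a CNC controller executes. The rectangular outline, feed rate and simplified G-code dialect are illustrative assumptions rather than any particular machine’s format.

    # A minimal sketch of the CAD/CAM-to-CNC idea: a program takes a
    # designed outline (here, a simple rectangle) and emits the machine
    # instructions a CNC controller would execute. The outline, feed rate
    # and G-code dialect are illustrative assumptions only.

    def outline_to_gcode(points, feed_rate=300):
        """Convert a closed 2D outline (list of (x, y) points, in mm)
        into a list of simple G-code moves."""
        commands = ["G21  ; units: millimetres", "G90  ; absolute positioning"]
        x0, y0 = points[0]
        commands.append(f"G0 X{x0:.3f} Y{y0:.3f}  ; rapid move to start")
        # Cut along each edge of the outline, returning to the start point.
        for x, y in points[1:] + [points[0]]:
            commands.append(f"G1 X{x:.3f} Y{y:.3f} F{feed_rate}  ; linear cut")
        return commands

    # A 40 mm x 20 mm rectangular outline.
    for line in outline_to_gcode([(0, 0), (40, 0), (40, 20), (0, 20)]):
        print(line)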

CNC and modern precision machining first appeared shortly after the Second World War as a result of the aircraft industry’s desire to produce more accurate and complex parts. The ability to create these parts reliably and efficiently helped nations rebuild after the devastation of the war.

The cost of precision machining has also fallen over the years. Now that designs can be saved on computer, the cost of setting up the process is reduced, enabling skilled machine workers to produce high-quality items cheaply and efficiently. This ensures that precision machining technology has a long life ahead of it as it continues to deliver large volumes of precision parts at low prices. It is likely that more and more products, especially computers, phones and tablets, will be made with this machinery rather than the traditionally used plastics, which are prone to breaking; MacBooks, for example, are milled from a solid block of aluminium.

All in all, the process of precision machining has changed beyond recognition from when it was first invented, but its principles and the legacy it is built on have given rise to the latest advance in digital technology: 3D printing.

The History And Uses Of Wire Rope

Companies that transport materials rely on their equipment to get the job done. Although this is true of any business, the difference is that they must have the right equipment to do the job safely and efficiently, whether they are hauling wood, metal or any other heavy material. In fact, this holds even when the company is transporting lightweight materials. The bottom line is that securing the cargo is the most important consideration.

In this case, there are several options available, but one of the best is wire rope. Not only does this method of securement ensure that cargo is safe, it also helps keep the people who work in this industry from being injured. How can one seemingly simple product do so much? It’s all in the manufacturing process.

Wire rope is exactly what it sounds like: rope made of wire. The key to its strength is in the twisting of the individual strands, which results in a helix. It was invented in Germany between 1831 and 1834 by an engineer named Wilhelm Albert. Albert worked in the mining industry, and the product was first used for this purpose. It quickly became a favorite choice, especially when compared to the hemp ropes and chains previously used, which were prone to breakage and failure, resulting in extensive loss and injury.

Today, wire rope is typically made of steel and is used with heavy machinery such as cranes and elevators, as well as in cable cars, aerial lifts and suspension bridges. Depending on the application, it may be coated with vinyl or nylon. The benefit of these coatings is that they make the wire rope more resistant to the weather, and therefore more durable overall.

Managing Design Complexity

“100% of your design documentation is contained in the specifications of your information resources.”

- Bryce’s Law

There are many companies today, most of them overseas, still tackling major systems projects, particularly in the areas of banking and manufacturing. These mammoth application development efforts contrast sharply with those of American companies, which have failed in such undertakings and are now content with chipping away at systems, program by program, in the hope that the disjointed software will somehow, someday interface. Whereas foreign competitors talk in terms of enormous systems with hundreds of programs and millions of lines of code, large integrated systems tend to intimidate the most ardent of American developers. But this is not so much a story about competition as it is about understanding design complexity.

People in both the East and the West recognize that the design and development of a total system is no small task. A system can consist of many business processes, procedures, programs, inputs, outputs, files, records, data elements, etc. The problem lies in how best to control these information resources and the design decisions associated with them. Two approaches are typically used: progressively break the problem into smaller, more manageable pieces, or tackle a minuscule portion of the problem at a time. Whereas the former requires a long-term perspective, the latter can show a quick return, which is more appealing to a company with a “fast track” mentality.

Some time ago we conducted a study of customer application development projects. Our research centered on two types of projects: those aimed at building a total system, and those aimed at building a single program. One obvious conclusion was that the number of information resources used in a major system was considerably greater than in a program.

However, the key observation made in the study was that there is a finite number of design decisions associated with each type of information resource. As an example, for an output, decisions have to be made as to its physical media (screen or report), size (number of characters), the messages associated with it, etc. For a data element, its logical and physical characteristics must be specified (definition, source, label, size, class, length, etc.). For a program, the language to be used, the program logic, the required file structures, etc. must all be decided. These design decisions can be simple or complex; regardless, they are all required in order to design a system or a program. When we multiply the number of design decisions by the number of information resources, we get an idea of the magnitude of a systems design project versus the design of a single program (see Figure 1).

FIGURE 1

NUMBER OF RESOURCES IN AVERAGE SYSTEMS PROJECT: 2,006

NUMBER OF DESIGN DECISIONS TO BE MADE: 49,850

NUMBER OF RESOURCES IN AVERAGE PROGRAM PROJECT: 98

NUMBER OF DESIGN DECISIONS TO BE MADE: 2,070

NOTE: Decisions are design oriented only; they do not include Project Management related decisions (such as those associated with planning, estimating and scheduling).

From this perspective, the average system design project is nearly 25 times larger than the average software design project in terms of complexity (49,850 design decisions versus 2,070). As a footnote, our findings also revealed that the “average” system design project is seven times larger than a “complex” software design project.
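The arithmetic behind that comparison is easy to verify; the short sketch below uses only the totals reported in Figure 1.

    # A back-of-the-envelope check of the magnitudes reported above. All
    # figures come straight from Figure 1; the arithmetic simply makes the
    # "nearly 25 times" claim explicit.

    SYSTEM_RESOURCES, SYSTEM_DECISIONS = 2_006, 49_850
    PROGRAM_RESOURCES, PROGRAM_DECISIONS = 98, 2_070

    # Average number of design decisions per information resource.
    print(SYSTEM_DECISIONS / SYSTEM_RESOURCES)    # ~24.9 decisions per resource
    print(PROGRAM_DECISIONS / PROGRAM_RESOURCES)  # ~21.1 decisions per resource

    # Relative complexity of the average system vs. the average program project.
    print(SYSTEM_DECISIONS / PROGRAM_DECISIONS)   # ~24.1, i.e. "nearly 25 times"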

This discrepancy in system/software complexity provides a clue as to how companies address the problem. Since a software design project is smaller and seemingly more palatable to implement than a total systems project, some companies will focus on software engineering tools and techniques, and abandon total systems engineering practices. This is one reason why programming tools enjoy popularity today.

Contrast this with the size of Japan’s “Best” project to build the country’s next generation of on-line banking systems. This was a major application development effort resulting in 72 “average” systems, a considerably larger project than is typically addressed in the United States.

MANAGING DECISIONS

There are two aspects to handling decisions: how they are formulated, and how they are controlled.

Trying to make nearly 50,000 design decisions in one step is not only an impossible task, it is a highly impractical way of operating. Just like the design of any product, a system must be designed in gradual phases, in such a way that it becomes possible to review and refine the design. In other words, the 50,000 design decisions will be made throughout the life of a development project, not all at once.

It is the responsibility of a systems engineering methodology to define the sequence of events for designing a system. As such, the methodology represents the channel for formulating decisions. Breaking a complex system design down into smaller, more manageable pieces also provides for:

  • Parallel development and delivery of portions of the system (concurrent development within a single project).
  • An environment conducive for building quality into a product (as opposed to inspecting for quality afterwards).
  • The formulation of Project Management related decisions (such as estimating and scheduling the delivery of systems, in part or in full).

This philosophy of design is no different than any other product design/development effort, such as shipbuilding, automobile manufacturing, bridge building, etc. All require a specific methodology that breaks the product down into its sub-assemblies and parts, thereby organizing the specification of parts and the design decisions associated with them.

Managing the decision-making process for even the smallest of application development projects can be a huge undertaking. We estimate there are approximately 500 design decisions associated with a small software design project (as compared to more than 125,000 decisions in the typical complex system design project). To record and control these decisions requires something more sophisticated than paper and pencil; it requires an automated “Information Resource Manager” (IRM), a software tool capable of inventorying and documenting an enterprise’s information resources.

Whether you call it an “IRM”, a “Repository”, a “Data Dictionary” or whatever, the philosophical heart of the product is based on the age-old concept of the “Bill of Materials”, whereby resources (also referred to as “components” or “parts”) are cataloged and cross-referenced to each other. Consider the parts manifest included in the maintenance booklet of a major appliance or lawn/garden tool; this type of diagram is familiar to any homeowner who has reviewed product maintenance/warranty booklets.

Every part in the product is identified by number and name (see the section to the right of the figure). To the left of the figure is a schematic showing how each part relates to the other parts and, as such, represents the assembly of the product for maintenance purposes.

The concept of the “Bill of Materials” provides the means to inventory resources, thus allowing us to share and re-use them. For example, many of the parts shown in Figure 2 are re-used in other lawnmower models offered by the manufacturer. How can we share and re-use resources without such a concept? The answer is simple: we cannot. This explains why there is considerable redundancy in our information resources and work effort. It also suggests most of our design decisions are maintained “by the seat of our pants.” Most college courses involving computing are unfamiliar with the Bill of Materials concept; their focus is on programming and file design, and little else.

The concept of the “Bill of Materials” has three objectives (illustrated in the sketch following this list):

  1. To uniquely identify each resource by number and name (as well as by aliases). Names are nice, but numbers offer a more precise way to uniquely identify a resource. Identification is critical. After all, we cannot share and re-use something if we do not know it exists.
  2. To record the part’s specifications, thus providing a way to determine whether the part can be re-used in another product (thereby promoting the sharing of parts and eliminating redundancy).
  3. To record where the part is used in a product or products (aka “where-used”). This specifies the relationship of parts to each other and, thereby, their assembly. It is also extremely useful for “impact analysis”, whereby we can analyze where a part is used across all of our products, not just one, which is vital for making intelligent decisions about modifying it. For example, if we change the specifications of a part in one product, the change will also impact every other product in which that part is used.
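As a rough illustration of these three objectives, here is a minimal Python sketch of a Bill of Materials catalog. The class, method and part names are hypothetical examples, not a description of any actual IRM product, but the mechanics of identification, specification and “where-used” tracking are the same.

    # A minimal sketch of a Bill of Materials catalog. All names here are
    # hypothetical; the point is the three objectives: identify each part,
    # record its specifications, and record where it is used.

    from dataclasses import dataclass, field

    @dataclass
    class Part:
        number: str                                    # objective 1: unique identifier
        name: str
        aliases: list = field(default_factory=list)
        specs: dict = field(default_factory=dict)      # objective 2: specifications
        where_used: set = field(default_factory=set)   # objective 3: products using it

    class BillOfMaterials:
        def __init__(self):
            self.parts = {}

        def register(self, part):
            if part.number in self.parts:
                raise ValueError(f"duplicate part number {part.number}")
            self.parts[part.number] = part

        def use(self, part_number, product):
            """Record that a product (re-)uses an existing part."""
            self.parts[part_number].where_used.add(product)

        def impact_of_change(self, part_number):
            """Impact analysis: every product affected if this part changes."""
            return sorted(self.parts[part_number].where_used)

    bom = BillOfMaterials()
    bom.register(Part("DE-0042", "customer-number",
                      specs={"class": "numeric", "length": 8}))
    bom.use("DE-0042", "order entry system")
    bom.use("DE-0042", "billing system")
    print(bom.impact_of_change("DE-0042"))  # ['billing system', 'order entry system']

Because the catalog knows every product a part appears in, a proposed change can be assessed across the whole inventory rather than one product at a time.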

By controlling parts in this manner, a product’s design is fully documented.

The “Bill of Materials” concept can easily accommodate information resources and offers the same benefits of sharing and re-using components. By doing so, we can manage the 50,000 design decisions accompanying a system design project. Our system/software products may be less tangible than an automobile, aircraft or lawnmower, but we can still apply the same concept to their control.

Therefore, an IRM repository should have the ability to identify, specify, and cross-reference all of the resources mentioned in Figure 1. This can certainly be done manually with paper, but that may lead to bureaucratic and access problems for developers; instead, automation is recommended. There are several such commercial products on the market, but it is also fairly easy to create such software using today’s Database Management Systems (DBMS), which make it easy to define and relate resources (and also provide excellent documentation services).
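As an illustration, here is a minimal sketch, assuming SQLite, of how a DBMS can define and relate resources in this way. The two-table schema (a resource inventory plus a “where-used” cross-reference) is an assumption for demonstration, not the design of any commercial repository.

    # A minimal sketch of an IRM repository built on SQLite. The schema and
    # sample resources are illustrative assumptions only.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE resource (
            number TEXT PRIMARY KEY,   -- unique identifier
            name   TEXT NOT NULL,
            type   TEXT NOT NULL,      -- system, program, file, record, element...
            specs  TEXT                -- design decisions recorded as text
        );
        CREATE TABLE cross_reference ( -- "where-used": parent uses child
            parent TEXT REFERENCES resource(number),
            child  TEXT REFERENCES resource(number)
        );
    """)
    db.executemany("INSERT INTO resource VALUES (?, ?, ?, ?)", [
        ("SYS-01", "order entry", "system", "on-line order processing"),
        ("PGM-07", "order edit", "program", "language: COBOL"),
        ("ELM-42", "customer-number", "element", "numeric, length 8"),
    ])
    db.executemany("INSERT INTO cross_reference VALUES (?, ?)", [
        ("SYS-01", "PGM-07"), ("PGM-07", "ELM-42"),
    ])

    # Impact analysis: which resources directly use the element ELM-42?
    for (parent,) in db.execute(
            "SELECT parent FROM cross_reference WHERE child = 'ELM-42'"):
        print(parent)  # PGM-07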

The IRM should be viewed as the hub of all development efforts, providing the means to interface (import/export) with a myriad of other development tools, e.g., CASE tools, prototyping aids, program generators, etc. Such tools use the intelligence of the information resources contained in the IRM to function accordingly. As an example, a program generator should be able to interpret the program and file specifications in order to produce the necessary code. Such development tools should also have the ability to turn around and import resource specifications back into the IRM. This is particularly useful for documenting existing systems/software (aka “Reverse Population”).
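The sketch below illustrates that hub role in miniature: a resource’s specifications are exported for a hypothetical external tool, and specifications produced elsewhere are imported back, as in “Reverse Population”. The JSON layout is an assumed interchange format, not a standard one.

    # A minimal sketch of the IRM-as-hub idea. The repository, resource
    # fields and interchange format are illustrative assumptions.

    import json

    def export_resource(repository, number):
        """Hand a resource's specifications to an external development tool."""
        return json.dumps(repository[number])

    def reverse_populate(repository, document):
        """Import specifications discovered by another tool back into the IRM."""
        resource = json.loads(document)
        repository[resource["number"]] = resource

    irm = {"PGM-07": {"number": "PGM-07", "name": "order edit",
                      "language": "COBOL", "files": ["ORDER-MASTER"]}}

    spec = export_resource(irm, "PGM-07")   # e.g., feed this to a code generator
    reverse_populate(irm, json.dumps({      # e.g., documenting an existing program
        "number": "PGM-09", "name": "order post", "language": "COBOL"}))
    print(sorted(irm))  # ['PGM-07', 'PGM-09']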

For information on how to create an IRM repository, please see =>http://www.phmainstreet.com/mba/pride/spir.htm

The concept of “Bill of Materials” is an important part of an overall strategy to implement an “Information Factory” environment to design and develop information resources. But this will be the subject of a separate paper.

CONCLUSION

This philosophy of managing design complexity is no different than what is found in the engineering and manufacturing of any product. Engineers break their design projects into smaller stages so that reviews can be performed and revisions implemented. A “bill of materials” processor is used to track the parts of a product and how they interrelate, which is no different in intent than the IRM tool.

For people immersed in programming, it is difficult to think in terms of “parts” as described herein, but it is a practical approach that can be applied to any development effort, large or small. Standardization and integration of information resources are built by design, not by accident.

Without a formalized methodology for design or an IRM tool to record design decisions, a major system design is incomprehensible; there are simply too many variables for the human mind to remember or control using manual techniques. It is not that analysts do not want to take on a major systems design project; they simply cannot. They lack the organization and the proper tools to perform the job effectively. Because of this, they default to what they know best, programming, and tackle systems piecemeal.

The difference between East and West here is not one of working harder, but smarter. The Japanese and Europeans are simply better organized and equipped to perform system design than their American counterparts. This can be attributed, in large part, to management’s sensitivity to the role systems play in a company. Because of this, they are not afraid to tackle large endeavors, while American companies view such undertakings as too massive to attempt. As such, they sidestep large projects in favor of smaller ones that may address only a portion of the overall problem. The result is an unsettling situation in which our competitors are rapidly becoming the world’s systems engineers, while Americans become the world’s software engineers.

For more information on our philosophies of Information Resource Management (IRM), please see the “Introduction” section of “PRIDE” at =>http://www.phmainstreet.com/mba/pride/intro.htm#irm