“The goal of this course is to prepare you for your technical future.”

Hi, Habr. Remember the awesome article "You and Your Work" (+219, 2442 bookmarks, 394k views)? It turns out Hamming (yes, the Hamming of the self-checking and self-correcting Hamming codes) has a whole book based on his lectures. We are translating it, because the man talks sense. This is not just a book about IT, it is a book about the thinking style of incredibly cool people. "It is not just a charge of positive thinking; it describes the conditions that increase the chances of doing great work." We have already translated 27 (of 30) chapters, and we are working on a paper edition.
Chapter 18. Modeling - I
(Thanks for this translation go to Valentin Pinchuk, who responded to my call in the "previous chapter".) Who wants to help with the translation - write a private message or email magisterludi2016@yandex.ru

An important use of computers in our time, besides entering and editing text, graphics, programming and so on, is modeling.
Modeling is the answer to the question: "What if...?"
What if we did this? What if the situation were like that?
More than 9 out of 10 experiments are currently performed on computers rather than in reality. I have already mentioned my serious concern that we depend more and more on modeling and investigate reality less and less, and seem to be drifting back to the old scholastic approach: what is written in the books is reality and needs no constant checking against experiment. But I will not dwell on this issue in detail now.
We use computers for modeling because it is:
- first, cheaper;
- second, faster;
- third, usually better;
- fourth, it makes possible what cannot be done in the laboratory at all.
The first two points say that even allowing for the money and time spent on programming, with all its mistakes and other shortcomings, modeling is still much cheaper and faster than obtaining the required laboratory equipment. Moreover, if you have ordered expensive, high-quality laboratory equipment in recent years, within less than 10 years you will find it has to be written off as obsolete. These arguments do not apply if the situation is constantly monitored and the laboratory equipment is in constant use. But let it sit idle for a while, and suddenly it no longer works properly! This is called "shelf life", but sometimes it is the "shelf life" of the skills to use the equipment rather than of the equipment itself! I have been convinced of this too often by my own experience. Intellectual shelf life is often more insidious than physical shelf life.
According to the third point, we can get more accurate data from modeling than from direct measurement in the real world. Field, or even laboratory, measurements with the required accuracy are often hard to obtain in a dynamic setting. In addition, modeling often lets us work over a much wider range of the independent variables than any laboratory setup allows.
According to the fourth point, probably the most important of all, simulation can do what no experiment can.
I will illustrate these points with specific situations in which I personally took part, so that you can see how modeling may be useful to you. I will also point out some details that will give those with little modeling experience a better idea of how to approach it, since you cannot realistically carry out a simulation that would take years to complete.
The first big calculations I took part in were at Los Alamos during World War II, when we were designing the first atomic bomb. There was no possibility of a trial experiment on a smaller scale: either you have a critical mass or you do not.
Without going into the secret details, I remind you that one of the two designs was spherically symmetric and initiated by surrounding explosives, Fig. 18.I.
The entire volume of the bomb material was divided into concentric spherical shells. For each shell we set up the equations of the forces acting on it (on both its faces), together with the equations of state describing, among other parameters, the density of the material as a function of the pressure on it. The time axis was then divided into intervals of 10⁻⁸ s. For each time interval we computed, with the help of computers, how each shell would move during that interval and what would happen to it under the forces applied to it. There was, of course, a separate study of how the shock wave from the surrounding explosive passed through this region. But all the laws were, in principle, well known to the experts in the corresponding fields. The pressures were such that one could only speculate that everything would go roughly the same way beyond the range of the tests that had been performed, but even an approximate physical theory gave some assurance.
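To make the pattern concrete, here is a minimal sketch of the "concentric shells advanced in small time steps" scheme described above. It is not the Los Alamos calculation: the force law, the eos() equation of state and all the constants below are invented placeholders, and only the loop structure (forces on each shell, equation of state, small time step) reflects the text.

```python
# Toy version of the "shells plus small time steps" pattern; eos() and all
# constants are invented placeholders, not the real physics.
import numpy as np

N_SHELLS, DT, N_STEPS = 100, 1e-8, 1000        # time step of 1e-8 s, as in the text

r = np.linspace(0.01, 1.0, N_SHELLS)           # shell radii
v = np.zeros(N_SHELLS)                          # shell velocities
rho = np.ones(N_SHELLS)                         # initial densities
mass = rho * np.gradient(r) * 4 * np.pi * r**2  # mass carried by each shell (fixed)

def eos(rho):
    """Placeholder equation of state: pressure as a function of density."""
    return rho**3

for step in range(N_STEPS):
    p = eos(rho)
    dp = np.gradient(p)                         # pressure difference across each shell
    accel = -dp * 4 * np.pi * r**2 / mass       # net force on the shell / its mass
    v += accel * DT                             # advance velocity over one interval
    r += v * DT                                 # move the shell
    rho = mass / (np.gradient(r) * 4 * np.pi * r**2)  # new density from new geometry
```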
Fig. 18.I

This illustrates the main point I want to dwell on. You must have extensive and deep special knowledge of the subject area. In fact, I am inclined to regard the many courses you have already taken, and those still ahead of you, as the chief means of acquiring the relevant expert knowledge. I want to stress this obvious need for domain expertise - too often I have seen modeling experts ignore this basic fact and believe they can safely carry out the modeling on their own. Only a subject matter expert can know whether the things you could not include in the model are vital to the accuracy of the simulation or can safely be neglected.
Another important point is that in most cases modeling is a stage that is repeated over and over again, many times, with the same program - otherwise the effort of writing the program would hardly pay off. In the case of the bomb, the same calculations were performed for each shell and then for each time interval - a myriad of repetitions. In many cases the computational power of the machine far exceeds our programming capacity, so it pays to look for the repetitive parts of the upcoming modeling in advance, and, where possible, to organize the modeling around them.
Weather forecasting presents a modeling task very similar to the nuclear bomb. Here the atmosphere is divided into large blocks of air, and for each block the values of cloud cover, albedo, temperature, pressure, humidity, wind speed, etc. are initialized, see Fig. 18.II.
Then, using conventional atmospheric physics, we follow the corresponding changes of each block over a small time interval. It is the same element-by-element calculation method as in the previous example.
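The same element-by-element idea on a spatial grid can be sketched as follows; the "physics" is just a toy exchange between neighbouring blocks (nothing like real atmospheric physics), and every constant is invented for illustration.

```python
# Toy grid update: divide the "atmosphere" into blocks, initialize each block,
# then advance every block a small time step using its neighbours. The exchange
# rule below is an invented diffusion-like placeholder, not real meteorology.
import numpy as np

NX, NY, DT, N_STEPS = 40, 40, 0.1, 500
temp = 280.0 + np.random.rand(NX, NY)          # initialize each block (say, temperature)

for step in range(N_STEPS):
    neighbours = (np.roll(temp, 1, 0) + np.roll(temp, -1, 0) +
                  np.roll(temp, 1, 1) + np.roll(temp, -1, 1))
    temp += DT * 0.1 * (neighbours - 4 * temp)  # each block relaxes toward its neighbours
```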
However, there is a significant difference between the two tasks, the bomb and the weather forecast. For the bomb, small deviations in the simulated process do not significantly affect the overall result, but the weather, as you know, is very sensitive to small changes. It is said that even the flap of a butterfly's wings in Japan can affect whether a storm hits this country and how severe it will be.
Fig. 18.II

This is a fundamental point I must dwell on. If the simulation has a margin of stability, in the sense that its overall behavior resists small changes, then the simulation is quite realistic; but if small changes in some details can lead to very different results, then accurate modeling is hard to achieve. Of course, the weather also has long-term stability: the seasons follow their appointed rounds regardless of small deviations. Thus there is both short-term (day-to-day) instability in the weather and long-term (year-to-year) stability. And the ice ages show that there are even longer-term instabilities in the weather and, no doubt, even more lasting stabilities!
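The short-term instability is easy to demonstrate numerically. The snippet below uses the logistic map, a standard textbook example of sensitive dependence (chosen here only as an illustration, it is not from Hamming): two starting values that differ by 10⁻¹⁰ become completely uncorrelated within a few dozen steps.

```python
# Sensitive dependence on initial conditions: two trajectories of the logistic
# map, started 1e-10 apart, diverge until they are completely uncorrelated.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-10
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.1e}")
```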
I have come across many problems of this kind. It is often very difficult to determine in advance whether stability or instability will dominate the task and, therefore, to judge whether the desired results can be obtained. When you do a simulation, study this aspect of the task carefully before you dive too deep into it, so that you do not discover later, after spending a lot of effort, money and time, that you cannot get acceptable results. Thus there are situations that are easy to model, situations that practically cannot be simulated at all, and most of the rest lie between these two extremes. Be cautious in your promises of what modeling can do!
When I joined Bell Telephone Laboratories in 1946, I was soon involved in the early design stages of the very first NIKE guided missile system. I was sent to the Massachusetts Institute of Technology to use their RDA #2 differential analyzer. There I learned how the parts of the analyzer worked together and picked up many tips from specialists far more sophisticated in running simulations.
The initial design called for an inclined launch of the missile. Variational equations gave me the ability to fine-tune various components, such as the wing size. I should mention that computing one trajectory took about half an hour, and roughly half that time again went into deciding what the next run should be. So I had plenty of time for observation and for thinking hard about why things went the way they did. Within a few days I gradually acquired a "feel" for the missile's behavior - why it behaves as it does under the different guidance laws I applied.
In time I came to the conclusion that a vertical launch was best of all. A quick climb out of the dense lower layers of air into the thin ones was the better strategy - I could well afford the extra air resistance later, when the commands to tip the trajectory over were given. In doing so, I found I could significantly reduce the size of the wings. I also realized that the equations and constants given to me for evaluating the effects of changes in the missile design could hardly stay accurate over such a wide range of parameter changes (they had never shown me the original equations, but I could guess). So I called for advice and found I was right - I should go home and get new equations.
After some delay, caused by other users wanting their time on RDA #2, I was soon back at work, now more experienced and sophisticated. I continued to develop a feel for the missile's behavior - I had to "sense" the forces acting on it under the different trajectory-shaping programs. And the waiting time, while the solution slowly appeared on the plotter, gave me the chance to understand what was happening. I often wonder what would have happened if I had had a modern high-performance computer. Would I ever have acquired that feel for the missile on which so much of the final design depended? I doubt that hundreds of extra trajectories would have taught me as much - I simply do not know. But it is exactly for this reason that to this day I am suspicious of producing masses of computations without giving due thought to what you have obtained. Volume of output seems to me a poor substitute for insight into the simulated situation.
The results of these first runs led us to choose a vertical launch (which spared us unnecessary ground equipment in the form of a circular guide rail and other devices), simplified the design of many other components, and reduced the wings to about 1/3 of the size I had initially been given. I found that large wings, while providing greater maneuverability in principle, increase the air resistance so much in the early parts of the trajectory that the resulting lower flight speed gives less maneuverability in the final phase of approach to the target.
Of course, at the early stage of modeling a simple atmosphere model - density decreasing exponentially with altitude - and other simplifications were used, which were replaced in the later stages. This gave me another conviction: using simple models in the early stages lets you get a general picture of the whole system, a picture that would inevitably be obscured in any full-scale simulation. I strongly recommend starting with simple modeling and then developing it into something more complete and more accurate, so that an understanding of the essentials comes as early as possible. Of course, when choosing the final design you must take into account every nuance that may affect it. But (1) start as simply as you can, provided you include all the main effects, (2) get the general picture, and then (3) go into the details of the simulation.
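As an illustration of the "simple model first" advice, here is a minimal vertical-climb model with exponentially decreasing air density and quadratic drag. The thrust, mass and drag numbers are invented; the real NIKE equations were, of course, far richer.

```python
# Deliberately simple trajectory model: vertical climb with constant thrust,
# gravity, and quadratic drag in an exponential atmosphere. All vehicle numbers
# (thrust, mass, drag_area) are invented for illustration.
import math

g, rho0, scale_height = 9.81, 1.225, 8000.0    # standard-ish atmosphere constants
mass, thrust, drag_area = 500.0, 5.0e4, 0.05   # invented vehicle parameters
dt, t, h, v = 0.01, 0.0, 0.0, 0.0

while t < 20.0:
    rho = rho0 * math.exp(-h / scale_height)   # density falls off exponentially
    drag = 0.5 * rho * drag_area * v * abs(v)  # quadratic drag opposing the motion
    a = (thrust - drag) / mass - g
    v += a * dt
    h += v * dt
    t += dt

print(f"after {t:.0f} s: altitude {h / 1000:.1f} km, speed {v:.0f} m/s")
```

Even this crude sketch shows the trade-off mentioned above: increase drag_area (more wing) and the early drag grows, so the speed later in the climb drops.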
Guided missiles were among the earliest ventures into supersonic flight, and this was another large source of uncertainty. The data from the only two supersonic wind tunnels available to us were frankly contradictory.
Guided missiles naturally led to space flight, where I was involved less in the modeling itself and more as an outside consultant and in the early planning of the so-called cyclogram (schedule) of the project.
Another early simulation I recall was the design of a traveling wave tube. Again, on primitive relay equipment, I had plenty of time to think, and I realized that, as the calculations proceeded, I could work out what shape the tube ought to have instead of the traditional constant diameter. To see how this came about, consider the basic design of a traveling wave tube. The idea is that you send the input wave along a helix tightly wound around a hollow tube, so that the effective speed of the electromagnetic wave along the tube is greatly reduced. Then an electron beam is sent along the axis of the tube.
The beam initially travels faster than the wave running along the helix. The interaction between the wave and the beam slows the electron beam down - which means a transfer of energy from the beam to the wave, that is, amplification of the wave! But obviously at some point along the tube their speeds become roughly equal, and from then on further interaction only makes things worse. This gave me the idea that if the diameter of the tube is gradually increased (and with it the path the wave travels around the turns of the helix - translator's note), the beam will again be faster than the wave and more energy will be transferred from beam to wave. Indeed, on each calculation cycle it was possible to compute the ideal tube profile.
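The step-by-step profile calculation can be sketched roughly like this: at each step the helix radius is chosen so that the (slowed) wave stays slightly slower than the beam, and the beam then gives up a little energy before the next step. The coupling law and every constant here are invented; this is only the shape of the calculation, not the real tube design.

```python
# Toy tapered traveling-wave-tube profile: pick the helix radius at every step so
# the wave's axial speed stays a bit below the beam's. All constants and the
# energy-transfer rule are invented placeholders.
import math

C = 3.0e8          # speed of the wave along the helix wire, m/s
PITCH = 1.0e-3     # helix pitch, m (invented)
MARGIN = 0.05      # keep the beam 5% faster than the wave
COUPLING = 0.02    # invented per-step energy-transfer coefficient
v_beam = 2.0e7     # initial beam velocity, m/s (invented)

def radius_for_wave_speed(v_wave):
    """Helix radius that gives the requested axial phase velocity."""
    turn_length = C * PITCH / v_wave           # wire length per turn needed
    return math.sqrt(turn_length**2 - PITCH**2) / (2 * math.pi)

profile = []
for step in range(50):
    profile.append(radius_for_wave_speed(v_beam / (1 + MARGIN)))
    v_beam *= 1 - COUPLING * MARGIN            # beam slows as it feeds the wave

print(f"radius grows from {profile[0] * 1e3:.2f} mm to {profile[-1] * 1e3:.2f} mm")
```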
I have also made unpleasant discoveries. As a rule, the equations actually used were local linearizations of more complex non-linear equations. Somewhere around the twentieth to fiftieth step of the calculation I could estimate the neglected non-linear component. On some projects I found, to the amazement of the researchers, that the estimated non-linear component was larger than the computed linear one - thereby killing the approximation and stopping the useless calculations.
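The check itself is simple to build into a run. A sketch, with an invented equation: integrate the linearized system, keep estimating the term the linearization drops, and stop as soon as that estimate overtakes the linear term.

```python
# Integrate a linearized equation while watching the neglected non-linear term;
# stop when the dropped term exceeds the linear one. The equation is invented:
# y' = y + 0.1*y**2, linearized about y = 0 as y' = y.
y, dt = 1.0, 0.01
for step in range(1, 1001):
    linear_term = y
    dropped_term = 0.1 * y * y                 # the term the linearization ignores
    if abs(dropped_term) > abs(linear_term):
        print(f"step {step}: dropped term {dropped_term:.2f} exceeds the linear "
              f"term {linear_term:.2f} - the approximation has broken down")
        break
    y += dt * linear_term                      # linearized update only
```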
Why tell this story? Because it vividly demonstrates that an inquiring mind can help in modeling even when you work alongside experts in a field where you are an amateur. By feeling every little detail with your own hands, you have a chance to see what others missed, make a significant contribution, and save machine time as well! How often have I found omissions in a simulation that the users of its results would hardly have recognized.
There is an important step you must take, and I want to emphasize it: master the special jargon. Every specialty has its own jargon, which tends to hide what is going on from outsiders, and sometimes from insiders! Watch out for jargon - learn to recognize it as a special language that eases communication within a narrow range of things or events, but impedes thinking outside the area for which it was designed. Jargon is both a necessity and a curse. Understand that you must strain your brain to get its advantages and avoid its traps, even in your own area of expertise!
During the long years of evolution, cavemen apparently lived in groups of about 25 to 100 people. Outsiders were, as a rule, not welcome, although we believe this did not apply to abducted wives. Comparing the many centuries of caveman evolution with the brief span of civilization (less than ten thousand years), we see that evolution mostly selected us to shut out strangers, and one of the ways of doing this is to use special private languages. Thieves' slang, group slang, the private language of husband and wife made of words, gestures and even raised eyebrows - all are examples of a private language used to exclude outsiders. Consequently, this instinctive resort to jargon when a stranger appears must always be consciously resisted - we now work in far larger groups than cavemen did, and must constantly try to override this trait left over from the early stage of our development.
Mathematics is not always the magic language you need. To illustrate this, let me return to the naval intercept modeling I mentioned in passing, which was equivalent to a system of 28 first-order differential equations. But this time I have to give away the plot. Ignoring everything but the essential part, consider the problem of solving a single differential equation
y' = f(x, y) with |y| ≤ 1, see Fig. 18.III.
Keep this equation in mind while I talk about the real problem. I programmed the real problem, a system of 28 differential equations, to get a solution, and then limited some values to 1, as if it were a voltage constraint. Despite the resistance of the consultant, a friend of mine, I insisted that he take full part with me in the detailed (binary) programming of the problem, and I explained to him what was happening at every stage. I refused to proceed until he did, so he had no other choice! We got to the limits in the program, and he said: "Dick, this is a stabilizer limit, not a voltage limit," meaning that the restriction had to be imposed at each step of the calculation, and not on the final result.
This is the best example I know to demonstrate how both of us understood exactly what the mathematical symbols meant - neither of us had any doubts - and yet our interpretations of those symbols turned out to be completely different!
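The two readings are easy to contrast in code. A toy right-hand side is used below just to make the answers visibly different: applying the limit inside the integration loop (the "stabilizer limit" my consultant meant) gives one result, clamping only the finished output (my "voltage limit" reading) gives another.

```python
# Two interpretations of "y' = f(x, y) with |y| <= 1": limit y at every step of
# the integration, or limit only the final result. f is an invented toy example.
import math

def f(x, y):
    return 4.0 * math.cos(x)                   # invented right-hand side

def integrate(limit_each_step):
    x, y, dx = 0.0, 0.0, 0.001
    while x < 5.0:
        y += dx * f(x, y)
        if limit_each_step:
            y = max(-1.0, min(1.0, y))         # limiter acts inside the loop
        x += dx
    return y if limit_each_step else max(-1.0, min(1.0, y))

print("limit at every step  :", round(integrate(True), 2))   # about -0.84
print("limit the output only:", round(integrate(False), 2))  # exactly -1.0
```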
Fig. 18.III

Had we not caught this error, I doubt that any real, live experiments with the aircraft would have revealed the loss of maneuverability that my interpretation produced. That is why, to this day, I insist that a person with a deep understanding of what is being modeled be involved in the detailed programming. If you do not do this, you may run into similar situations in which both the consultant and the programmer know exactly what is meant, yet their interpretations differ so much that they lead to completely different results!

You should not get hung up on the idea that modeling is only about functions of time. One of the tasks I was asked to solve on a differential analyzer, which we had assembled from old parts of the M9 anti-aircraft fire-control device, was to compute the probability distributions of busy calls in a central telephone office. Never mind that I was given an infinite system of coupled linear differential equations, each describing the probability distribution of the number of calls at the central office as a function of the total load; I had to manage somehow on a finite machine that had, as I recall, only 12 integrators. I used the machine equivalent of the full input impedance of a circuit: taking the difference between the last two computed probabilities, I assumed it was proportional to the difference between the next two (with a reasonable proportionality constant derived from the two preceding functions). In this way I could approximately capture the contribution of the next, uncomputed equation. The results were wanted by the switching department and, I believe, made an impression on my boss, who at the time still had a low opinion of computers.

There was also underwater simulation, in particular for the acoustic array my friend installed in the Bahamas, where, of course, in the harsh winters (the author is joking - translator's note) he often had to go to check everything and take new measurements. Many simulations of the design and behavior of transistors were carried out as well.

We modeled microwave relay stations with their receiving horns, and in particular how a pulse applied at one end of a chain of relay stations behaves as it passes through the whole chain, and what this means for the stability of the entire system of stations. Even if each station recovers quickly from the pulse, the pulse might conceivably grow as it crosses the continent. Each individual relay station was stable, in the sense that an impulse damped out in time, but the question of spatial stability remained open: could a random impulse grow without bound while crossing the continent? I called this problem "spatial stabilization." We had to know under what conditions this could or could not happen, and simulation was necessary because, among other things, the very shape of the impulse changed as it crossed the continent.

I hope you see that, in principle, any situation that admits some mathematical description can be simulated. But in practice you should be very cautious about modeling unstable situations. In chapter 20 I will tell you about one extreme case I had to solve. It was very important to Bell Telephone Laboratories, and it meant, at least to me, that I had to get an answer no matter what excuses I could give myself that it was impossible.
Answers to important problems can always be found if you are determined to get them. They may not be perfect, but in a desperate situation something is better than nothing - provided it can be trusted!

Mistakes in modeling have very often forced good ideas to be abandoned! Yet little about such mistakes can be found in the literature, since they are very, very rarely reported. One well-known erroneous model, widely publicized before its mistakes were discovered by others, was the world model created by the so-called Club of Rome. It turned out that the equations they chose were bound to show a catastrophe regardless of the initial data or the choice of most of the coefficients! And when others obtained these equations and tried to repeat the calculations, serious errors were found in them! I will return to this aspect of modeling in the next chapter, because it raises a very serious question: whether to report things that make people believe what they want to believe, even though they are not so, or things that will discourage people from pursuing their ideals.

To be continued...

Who wants to help with the translation, layout and publication of the book - write a private message or email magisterludi2016@yandex.ru. By the way, we have also launched a translation of another very cool book -
“The Dream Machine: The History of the Computer Revolution”.
Book contents and translated chapters

Foreword
- Intro to the Art of Doing Science and Engineering: Learning to Learn (March 28, 1995) Translation: Chapter 1
- Foundations of the Digital (Discrete) Revolution (March 30, 1995) Chapter 2. Basics of the digital (discrete) revolution
- “History of Computers - Hardware” (March 31, 1995) Chapter 3. Computer History - Hardware
- History of Computers - Software (April 4, 1995) Chapter 4. Computer History - Software
- History of Computers - Applications (April 6, 1995) Chapter 5. Computer History — A Practical Application
- “Artificial Intelligence - Part I” (April 7, 1995) Chapter 6. Artificial Intelligence - 1
- Artificial Intelligence - Part II (April 11, 1995) Chapter 7. Artificial Intelligence - II
- Artificial Intelligence III (April 13, 1995) Chapter 8. Artificial Intelligence-III
- N-Dimensional Space (April 14, 1995) Chapter 9. N-Dimensional Space
- “Coding Theory - The Representation of Information, Part I” (April 18, 1995) (translator disappeared :( )
- Coding Theory - The Representation of Information, Part II (April 20, 1995) Chapter 11. Coding Theory - II
- “Error-Correcting Codes” (April 21, 1995) Chapter 12. Error Correction Codes
- Information Theory (April 25, 1995) (translator disappeared :( )
- Digital Filters, Part I (April 27, 1995) Chapter 14. Digital Filters - 1
- Digital Filters, Part II (April 28, 1995) Chapter 15. Digital Filters - 2
- Digital Filters, Part III (May 2, 1995) Chapter 16. Digital Filters - 3
- Digital Filters, Part IV (May 4, 1995) Chapter 17. Digital Filters - IV
- Simulation, Part I (May 5, 1995) Chapter 18. Simulation - I
- Simulation, Part II (May 9, 1995) Chapter 19. Modeling - II
- Simulation, Part III (May 11, 1995)
- Fiber Optics (May 12, 1995) Chapter 21. Fiber Optics
- Computer Aided Instruction (May 16, 1995) (translator disappeared :( )
- "Mathematics" (May 18, 1995) Chapter 23. Mathematics
- Quantum Mechanics (May 19, 1995) Chapter 24. Quantum Mechanics
- Creativity (May 23, 1995). Translation: Chapter 25. Creativity
- Experts (May 25, 1995) Chapter 26. Experts
- Unreliable Data (May 26, 1995) Chapter 27. Unreliable Data
- Systems Engineering (May 30, 1995) Chapter 28. System Engineering
- "You Get What You Measure" (June 1, 1995) Chapter 29. You Get What You Measure
- “How do we know what we know” (June 2, 1995) (translator missing :( )
- Hamming, “You and Your Research” (June 6, 1995). Translation: You and Your Work
Who wants to help with the translation, layout and publication of the book - write a private message or email magisterludi2016@yandex.ru