Yet Another Coder

What is premature optimization

Several generations of software folklore have captured the negative attitude toward premature optimization. But what exactly is “premature” optimization? I present some ideas about that today.

“Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.” – Donald Knuth

“More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason — including blind stupidity.” — W.A. Wulf

“The First Rule of Program Optimization: Don’t do it. The Second Rule of Program Optimization (for experts only!): Don’t do it yet.” — Michael A. Jackson

I think one missing piece in all the premature optimization talk is that it really isn’t about code optimization per se, but about software design strategy in general. Someone claiming that code optimization is “premature” should be able to show that another design strategy is “mature” in contrast. I like to think about design as a search problem: you are at some point A in a multi-dimensional design space, and you need to get to some area B, determined by the new user stories you must support and the acceptable ranges of the myriad characteristics your solution has to satisfy. Your design documents describe the path you’re taking through that multi-dimensional space to get from A to B (or at least to some point C that is closer to B than A was). Changes you commit to the repository are the actual moves in the design space, and the amount of time you spend on each change is the actual “cost” of that move.

Now the tricky part is that the design space is not static: every move you make (and every bond you break) can potentially change the cost of any other move in the space. The extent of this potential change is determined by the nature of the move and by the “interconnectedness” of your current design, such as the coupling between and cohesion of the modules. Naturally, every move also takes you to a different point in the space, so the distances from your position to other points change. To put it simply, developing software is like solving a billion-sided Rubik’s cube where the stickers are also allowed to change colors after each rotation. But I digress.

What, then, would be the difference between a “premature” optimization and a non-premature one? A premature optimization is an optimization that doesn’t bring you closer to your desired area B. It is either a move in the direction opposite from your real goal, or a move that changes the costs of other moves unfavorably, so that the distance to the real goal increases. Two examples illustrate these cases:

1. Ms Piggy decides to optimize database access performance: she has noticed that some extra queries are being sent by the intermediate abstraction layer, so she decides to work around this layer and send the database queries directly from several places in the code. She spends two weeks finishing this work, and her tests show that the scenario is now faster. She doesn’t realize that the performance of this scenario is of low importance to users, because it runs nightly as part of an automatically scheduled db maintenance job; in fact, users are more concerned about the frequent connection failures. A few weeks later, someone has to undo Ms Piggy’s change in order to update the intermediate layer API to improve logging and connection reliability diagnostics. Ms Piggy’s optimization was “premature” because it was going in the direction opposite from the overall design goal and had to be undone.

2. Kermit decides to implement his module “with performance in mind”, so he avoids calling into existing methods from other modules and instead implements his own, “more efficient” versions of those methods. Most of the code in the module would not have caused performance issues even if it had used the existing methods. However, for the next few years his choice increases the cost of every change touching this module. Designers, unaware of the module’s bespoke implementation, underestimate the costs of design changes. Coders, unaware of it, introduce subtle and costly defects. Slowly the module is updated to call into the shared libraries, and the amount of duplicated logic decreases. Kermit’s optimization increased the costs of the changes that followed it without justification, thus the optimization was “premature”.

 “All of this reinforces the key rule that first you need to make your program clear, well factored, and nicely modular. Only when you’ve done that should you optimize.” — Martin Fowler, Yet Another Optimization Article

“…the strategy is definitely: first make it work, then make it right, and, finally, make it fast.” — Stephen C. Johnson and Brian W. Kernighan

Really, the “no premature optimization” rule is merely a heuristic for the search for the optimal path in the design space: “do not use a greedy algorithm to optimize the design”. Often, to get to the desired state, you have to make multiple moves of a “refactoring” nature, which do not “improve” any observable program behavior at all; all they do is decrease the costs of the further moves towards the goal.
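The difference between a greedy search and a whole-path search can be sketched in a few lines. Below is a toy illustration (the graph, the node names and the costs are all invented for this sketch): a greedy search that always takes the locally cheapest move picks the quick hack, while a whole-path search is willing to pay for the refactoring move up front because it lowers the total cost of reaching the goal.

```python
import heapq

# A toy design space: nodes are design states, edges are changes
# with a cost (all names and costs are made up for illustration).
graph = {
    "A":          {"quick_hack": 1, "refactor": 3},
    "quick_hack": {"B": 10},   # the hack makes the remaining work expensive
    "refactor":   {"B": 2},    # the refactoring pays off later
    "B":          {},
}

def greedy_path(graph, start, goal):
    """Always take the locally cheapest move (works on this toy graph)."""
    path, cost, node = [start], 0, start
    while node != goal:
        nxt = min(graph[node], key=graph[node].get)
        cost += graph[node][nxt]
        node = nxt
        path.append(node)
    return path, cost

def shortest_path(graph, start, goal):
    """Dijkstra: minimize the total cost of the whole route."""
    pq, seen = [(0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph[node].items():
            heapq.heappush(pq, (cost + c, nxt, path + [nxt]))

print(greedy_path(graph, "A", "B"))    # (['A', 'quick_hack', 'B'], 11)
print(shortest_path(graph, "A", "B"))  # (['A', 'refactor', 'B'], 5)
```

The “refactor” node is exactly the kind of move described above: it doesn’t change any observable behavior, it only makes the remaining moves cheaper, and a greedy strategy never takes it.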

Finally, among the many reasons why people make wrong design moves, I think miscommunication or insufficient communication is a major one. This situation is related to the “inner-outer world” dilemma, and premature optimization is one example of that tree falling in the forest when no one is around to hear. To avoid wrong moves, designers and coders need a clear shared vision of customers’ needs.

“Users are more interested in tangible program characteristics than they are in code quality. Sometimes users are interested in raw performance, but only when it affects their work. Users tend to be more interested in program throughput than raw performance. Delivering software on time, providing a clean user interface, and avoiding downtime are often more significant.” — Steve McConnell, Code Complete, 2nd ed.

Changing the outer world

Coders’ work is all about the inner world: ideas, concepts, logic, abstractions. Mindfulness. It’s difficult not to be sucked into “the Zone” after you’ve learned programming: it feels magical how a snippet of text typed into a machine can cause it to do something cool. The inner world is neat and tidy, and you’ve practically built it from nothing, and you know how it works, and you can change it at the speed of thought. That’s why coders don’t like slow build systems, slow unit tests or slow third-party libraries — because they stand in the way of building and running The World!

“Some people look at the world through rose-colored glasses. Programmers like you and me tend to look at the world through code-colored glasses. We assume that the better we make the code, the more our clients and customers will like our software.” — Steve McConnell, Code Complete, 2nd ed.

Sometimes coders get so over-obsessed with the inner world that they forget the outer world is still there. There is a time and place for unconstrained art and pure abstract research, but I’m talking about us, mortals, working for a company, which exists for a purpose and makes a product to sell. Making a product that someone wants to pay their money for isn’t an easy task, and it involves a lot of effort to understand, influence and change the outer world. And here’s the main “problem” with the outer world: it is very slow compared to the inner one. It has inertia. And it has other things to pay attention to.

So instead of learning the mechanisms of changing the outer world, coders start to ignore it. It’s too slow to work with. It “lags” too much behind the “cutting edge”. It’s not under control. That’s when coders lose touch. They start working on something that has no real value: refactoring without purpose, optimizing unused parts of the program, discussing the benefits of placing a curly bracket on a separate line. It’s like that tree that falls in the forest, when no one is around to hear.

Don’t lose touch. Think about the lifetime of your code. How is it supposed to unfold? How will your customers learn about the new feature? Who is going to demo it? Do they have all the documentation they need? Is there a clear shared understanding of the new feature, or are there several different views?

More can be done while the feature is taking shape than after it’s gone public. Fewer people have to be involved, and it’s easier to reach a consensus and coordinate your work. Others will help you, but make sure the forces are applied in the same direction. As the author and designer, you have a great deal of influence on how the feature is documented, presented, explained and demoed. Do what’s in your power to help get the right message out and change the outer world. Next time you’re cutting down a tree, make sure there’s enough media coverage for the event and that everyone on your team agrees on which tree is supposed to fall.

Make the paragraph the unit of code composition

In this post I discuss similarities between coding and writing prose, and then derive a coding style guideline.

A colleague recently sent me a code snippet and asked if there was any way to improve it. The snippet pretty much consisted of a single screen-long, screen-wide LINQ statement [1]. Long chains of calls are the bread and butter of LINQ, but maybe splitting them into smaller snippets would help clarify the intent of the code? People, including the code author, will read and re-read the code, and it is in their shared best interest to be able to understand it quickly and without ambiguity. Help your code reviewers understand and verify the code. Help future maintainers navigate the code and find the statement they need to fix or update.

Traditionally, low-level code is accompanied by a varying amount of free-text comments, symbolic names and mnemonics that “translate” the implementation at hand into the high-level, human-readable intent of the code to allow quicker understanding. LINQ (among other technologies) brings higher-level programming concepts and abstractions, making your code closer to free text than ever before and reducing the need for extra comments. Why comment when you can write a programming-language statement equally clear to a human and to a robot? The question of whether robots will eventually be able to understand your PM’s spec, or even your dev design document, without your help remains open, so for now the best way to make our code work as designed is to make the code read like the spec and let other people verify it. Thus our high-level code becomes a sort of spec, and the process of coding becomes a sort of technical prose writing. The good news here is that humankind has quite a lot of experience in this area, having worked on prose writing for the last couple of thousand years. There were ups and downs along the way, certainly, likewhentheworddelimiterswentoutoffavorforawhile, but all in all, over time some good guidelines were established to make our written communication more efficient.
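To make the contrast concrete in a language-neutral way, here is a small Python sketch (the text and the names are invented): the low-level version needs comments to recover the intent, while the higher-level, declarative version reads almost like the spec itself.

```python
from collections import Counter

text = "make it work make it right make it fast then make it faster"

# Low-level version: the intent has to be reconstructed from the comments.
counts = {}
for word in text.split():           # tokenize
    if len(word) > 3:               # keep only the "long" words
        counts[word] = counts.get(word, 0) + 1
top = sorted(counts.items(), key=lambda kv: -kv[1])[:2]

# High-level version: reads almost like the spec --
# "the two most common words longer than three letters".
top_declarative = Counter(
    word for word in text.split() if len(word) > 3
).most_common(2)

print(top_declarative)  # [('make', 4), ('work', 1)]
```

Both versions compute the same result; the second simply leaves less distance between the statement and its intent.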

Before the advent of the codex (book), Latin and Greek script was written on scrolls. Reading continuous script on a scroll was more akin to reading a musical score than reading text. The reader would typically have already memorized the text through an instructor, knew where the breaks were, and almost always read aloud, usually to an audience, in a kind of reading performance, using the text as a cue sheet. Organizing the text to make it more rapidly ingested (through punctuation) was not needed, and eventually the current system of rapid silent reading for information replaced the older, slower performance declaimed aloud for dramatic effect.


Strunk & White

One of the most influential style-guides for English is “The Elements of Style” by Strunk and White (wiki, amazon, goodreads). I’ve already lost count of the occasions when that book has helped me to improve my e-mails and documents. And here is Strunk and White on the Paragraph:

13. Make the paragraph the unit of composition.

The paragraph is a convenient unit; it serves all forms of literary work. As long as it holds together, a paragraph may be of any length — a single, short sentence or a passage of great duration.

See, the paragraph is a convenient unit for literary work, but if our code is a spec then it can serve us too! Why not use it as a unit of code composition? A block of lines separated from the rest of the code by empty lines can be treated as a paragraph in our coding analogy.


If the subject on which you are writing is of slight extent, or if you intend to treat it briefly, there may be no need to divide it into topics. Thus, a brief description, a brief book review, a brief account of a single incident, a narrative merely outlining an action, the setting forth of a single idea — any one of these is best written in a single paragraph. After the paragraph has been written, examine it to see whether division will improve it.

Ordinarily, however, a subject requires division into topics, each of which should be dealt with in a paragraph. The object of treating each topic in a paragraph by itself is, of course, to aid the reader. The beginning of each paragraph is a signal that a new step in the development of the subject has been reached.

OK, so this says that if our method is simple or brief, it can consist of a single paragraph of code. But we should always re-read the code and see if we can aid the readers by splitting the statements into several paragraphs, each focused on a single topic.


As a rule, single sentences should not be written or printed as paragraphs. An exception may be made of sentences of transition, indicating the relation between the parts of an exposition or argument.


As a rule, begin each paragraph either with a sentence that suggests the topic or with a sentence that helps the transition. If a paragraph forms part of a larger composition, its relation to what precedes, or its function as a part of the whole, may need to be expressed. This can sometimes be done by a mere word or phrase (again, therefore, for the same reason) in the first sentence. Sometimes, however, it is expedient to get into the topic slowly, by way of a sentence or two of introduction or transition.

In our analogy, we should avoid code paragraphs consisting of a single statement, and each paragraph should begin with a statement, a type or an identifier that suggests the topic of the whole paragraph. Now we can see how this helps the readers — just by scanning through the paragraphs one can comprehend the main steps in the method’s implementation and, after that, knowing the higher-level structure, dig deeper into the details. This can help reviewers understand and verify the overall design. It can also help maintainers find the paragraph they need to fix or improve.
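Here is what this might look like in code: a Python sketch with invented data, where each “paragraph” is a blank-line-separated block that opens with a comment naming its topic.

```python
def summarize_scores(rows):
    """Report each name's average score, highest first."""

    # Group: collect every score under its name.
    scores = {}
    for name, score in rows:
        scores.setdefault(name, []).append(score)

    # Summarize: one average per name.
    averages = {name: sum(vals) / len(vals) for name, vals in scores.items()}

    # Rank: highest average first.
    return sorted(averages.items(), key=lambda kv: -kv[1])

rows = [("ann", 90), ("bob", 70), ("ann", 100), ("bob", 80)]
print(summarize_scores(rows))  # [('ann', 95.0), ('bob', 75.0)]
```

Scanning just the opening comments — group, summarize, rank — already tells a reader the main steps of the method.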


In general, remember that paragraphing calls for a good eye as well as a logical mind. Enormous blocks of print look formidable to readers, who are often reluctant to tackle them. Therefore, breaking long paragraphs in two, even if it is not necessary to do so for sense, meaning, or logical development, is often a visual help. But remember, too, that firing off many short paragraphs in quick succession can be distracting. Paragraph breaks used only for show read like the writing of commerce or of display advertising. Moderation and a sense of order should be the main considerations in paragraphing.

Ah, another point in favor of paragraphs — a visual aid for the readers, catching their eyes and helping the navigation. Too few paragraphs — and the readers are scared. Too many paragraphs — and the readers are distracted.


To summarize: paragraphs are a well-known device for improving literary prose, and the same device can be applied to your high-level code. Split large blocks of lines into smaller topical chunks. Just as text is structured with whitespace, punctuation, paragraphs and chapters, delimit the code you write into identifiers, statements, groups of statements, methods, classes, components, etc. Split long LINQ chains into smaller chains, each labeled with an identifier explaining the result of that particular chain.
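For instance (a Python sketch with made-up log lines, since the original LINQ snippet isn’t shown here), the single dense statement and its split version compute the same result, but in the second one every intermediate chain is labeled with an identifier that explains its result:

```python
logs = ["ERROR disk full", "INFO started", "ERROR timeout",
        "INFO done", "ERROR disk full"]

# Before: a single dense statement, hard to review at a glance.
summary = sorted({(line.split(maxsplit=1)[1], logs.count(line))
                  for line in logs if line.startswith("ERROR")})

# After: each shorter chain is labeled with an identifier that
# explains its result, like the topic sentence of a paragraph.
errors = [line for line in logs if line.startswith("ERROR")]
causes = {line.split(maxsplit=1)[1] for line in errors}
summary_split = sorted((cause, errors.count("ERROR " + cause))
                       for cause in causes)

print(summary_split)  # [('disk full', 2), ('timeout', 1)]
```

The named intermediates cost a few extra lines, but a reviewer can now verify each step — filter, extract, count — on its own.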

Check out this short ParallelGrep sample code and see how cleanly the paragraphs of code are split; you can quickly scan through the method to understand the main steps. Textual comments introduce the paragraphs of code, but they don’t echo the code; they merely set the topic of each code paragraph.


P.S. Interestingly enough, there is also value in going the other way: from your code writing to your prose writing. The Pragmatic Programmer reveals that “English is Just a Programming Language”, therefore “Write documents as you would write code: honor the DRY principle, use metadata, MVC, automatic generation, and so on”.

  1. Some folks may recall the “old” single-statement LINQ ray tracer.

Pen and Parchment

A nice summary of the main design principles and overview of illustrations from Edward Tufte‘s books presented by the author.

Corporate memome

Dawkins writes that he introduced the term “meme” as an analogue of “gene”: memes replicate themselves in the cultural pool like genes replicate themselves in the gene pool. So then, I think, companies should be referring to their “corporate memome” rather than their “corporate DNA”. I assume most companies are actually more interested in evolving the ideas in their internal meme pools than in breeding better employee species.

Update: Wikipedia has an article, Organizational memory. Perhaps that is the same thing as the memome.

Zen and the Art of Motorcycle Maintenance

Finished reading “Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values” by Robert M. Pirsig (Goodreads, Amazon). It’s amazing. Recommended.

I suppose you could call that a personality. Each machine has its own, unique personality which probably could be defined as the intuitive sum total of everything you know and feel about it. This personality constantly changes, usually for the worse, but sometimes surprisingly for the better, and it is this personality that is the real object of motorcycle maintenance. The new ones start out as good-looking strangers and, depending on how they are treated, degenerate rapidly into bad-acting grouches or even cripples, or else turn into healthy, good-natured, long-lasting friends. This one, despite the murderous treatment it got at the hands of those alleged mechanics, seems to have recovered and has been requiring fewer and fewer repairs as time goes on.

All the time we are aware of millions of things around us—these changing shapes, these burning hills, the sound of the engine, the feel of the throttle, each rock and weed and fence post and piece of debris beside the road—aware of these things but not really conscious of them unless there is something unusual or unless they reflect something we are predisposed to see. We could not possibly be conscious of these things and remember all of them because our mind would be so full of useless details we would be unable to think. From all this awareness we must select, and what we select and call consciousness is never the same as the awareness because the process of selection mutates it. We take a handful of sand from the endless landscape of awareness around us and call that handful of sand the world.

The handful of sand looks uniform at first, but the longer we look at it the more diverse we find it to be. Each grain of sand is different. No two are alike. Some are similar in one way, some are similar in another way, and we can form the sand into separate piles on the basis of this similarity and dissimilarity. Shades of color in different piles—sizes in different piles—grain shapes in different piles—subtypes of grain shapes in different piles—grades of opacity in different piles—and so on, and on, and on. You’d think the process of subdivision and classification would come to an end somewhere, but it doesn’t. It just goes on and on. Classical understanding is concerned with the piles and the basis for sorting and interrelating them. Romantic understanding is directed toward the handful of sand before the sorting begins. Both are valid ways of looking at the world although irreconcilable with each other.

An untrained observer will see only physical labor and often get the idea that physical labor is mainly what the mechanic does. Actually the physical labor is the smallest and easiest part of what the mechanic does. By far the greatest part of his work is careful observation and precise thinking. That is why mechanics sometimes seem so taciturn and withdrawn when performing tests. They don’t like it when you talk to them because they are concentrating on mental images, hierarchies, and not really looking at you or the physical motorcycle at all. They are using the experiment as part of a program to expand their hierarchy of knowledge of the faulty motorcycle and compare it to the correct hierarchy in their mind. They are looking at underlying form.

Sometime look at a novice workman or a bad workman and compare his expression with that of a craftsman whose work you know is excellent and you’ll see the difference. The craftsman isn’t ever following a single line of instruction. He’s making decisions as he goes along. For that reason he’ll be absorbed and attentive to what he’s doing even though he doesn’t deliberately contrive this. His motions and the machine are in a kind of harmony. He isn’t following any set of written instructions because the nature of the material at hand determines his thoughts and motions, which simultaneously change the nature of the material at hand. The material and his thoughts are changing together in a progression of changes until his mind’s at rest at the same time the material’s right.

Schools teach you to imitate. If you don’t imitate what the teacher wants you get a bad grade. Here, in college, it was more sophisticated, of course; you were supposed to imitate the teacher in such a way as to convince the teacher you were not imitating, but taking the essence of the instruction and going ahead with it on your own. That got you A’s. Originality on the other hand could get you anything—from A to F. The whole grading system cautioned against it.

Grades really cover up failure to teach. A bad instructor can go through an entire quarter leaving absolutely nothing memorable in the minds of his class, curve out the scores on an irrelevant test, and leave the impression that some have learned and some have not. But if the grades are removed the class is forced to wonder each day what it’s really learning. The questions, What’s being taught? What’s the goal? How do the lectures and assignments accomplish the goal? become ominous. The removal of grades exposes a huge and frightening vacuum.

Mental reflection is so much more interesting than TV it’s a shame more people don’t switch over to it.

You’ve got to live right too. It’s the way you live that predisposes you to avoid the traps and see the right facts. You want to know how to paint a perfect painting? It’s easy. Make yourself perfect and then just paint naturally. That’s the way all the experts do it. The making of a painting or the fixing of a motorcycle isn’t separate from the rest of your existence. If you’re a sloppy thinker the six days of the week you aren’t working on your machine, what trap avoidances, what gimmicks, can make you all of a sudden sharp on the seventh? It all goes together.

Peace of mind isn’t at all superficial to technical work. It’s the whole thing. That which produces it is good work and that which destroys it is bad work. The specs, the measuring instruments, the quality control, the final check-out, these are all means toward the end of satisfying the peace of mind of those responsible for the work. What really counts in the end is their peace of mind, nothing else. The reason for this is that peace of mind is a prerequisite for a perception of that Quality which is beyond romantic Quality and classic Quality and which unites the two, and which must accompany the work as it proceeds.

Using CMKD to find function arguments in x64 dumps

Finding the root cause of a 64-bit native service crash from a minidump may be difficult. The main challenge is that the stack trace left by optimized code following the x64 calling convention lacks important data: the first four function arguments are typically not saved on the stack but just passed in registers. You may endeavor to analyze the assembly code up and down the call graph to find out whether any of those registers got pushed onto the stack somewhere else or were stored in memory, but this can be time-consuming and error-prone [1].

One mitigation for this problem that we use in our team is a customized crash-handling script that preserves recent pieces of the rolling service logs together with the crash dump to help the investigation. This post demonstrates another approach: using the CMKD debugger extension to recover some of the arguments via automated code analysis.

So, your 64-bit service has crashed and you are ready to find the root cause:

  • you have collected the crash dump, the executable and the symbols (the pdb file) from the crash scene;
  • you have installed the latest x64 WinDbg;
  • you have set the symbol path to your product symbol server: set _NT_SYMBOL_PATH=srv*c:\symbols*http://symbol_server_url;
  • you have obtained the 64-bit CMKD.dll and put it next to WinDbg.exe.

Start WinDbg from the folder with the dump and the other files you collected; this will help WinDbg discover the symbols. Open the dump (Ctrl+D):

 Loading Dump File [C:\dump\MyApp.dmp]
 User Mini Dump File: Only registers, stack and portions of memory are available

 [ ... ]
 This dump file has an exception of interest stored in it.
 The stored exception information can be accessed via .ecxr.
 (2660.1e74): Access violation - code c0000005 (first/second chance not available)
 000007fa`4b04398b c3 ret


By default, your debugging context is set to the crashed thread, but if the crash was caused by an unhandled exception, you will not immediately see the stack trace you expect from the k command and the like, because the exception unwinds the stack. As WinDbg suggests, you want to execute the .ecxr command first:

0:034> .ecxr
 rax=0000000000000000 rbx=0000000000000027 rcx=00000000000dbba0
 rdx=000007f775ce1f20 rsi=0000000000000000 rdi=0000000000000000
 rip=000007f776393d90 rsp=000000000a6c5ea0 rbp=000000000a6c5f50
 r8=0000000000000067 r9=0000000000000027 r10=0000000000000000
 r11=0000000000000200 r12=000007f775ce1fb0 r13=000007f775ce1f50
 r14=000007f775ce1f20 r15=0000000000000067
 iopl=0 nv up ei pl nz na po nc
 cs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010206
 000007f7`76393d90 c70000000000 mov dword ptr [rax],0 ds:00000000`00000000=????????

This should reset the context to the state right before the throw. You can now load the CMKD extension and use its !stack command. The command-line parameter -p displays the first four arguments for each function on the stack; the parameter -t also explains how CMKD recovered each particular parameter, so you can double-check that it was recovered correctly. Below is an excerpt from the WinDbg log comparing the arguments shown by the standard kb command versus the !stack command:

0:034> kb 6
 RetAddr : Args to Child : Call Site
 000007f7`76393d62 : 00000000`00000006 00000000`00000200 00000000`00000006 00000000`6d0ce908 : TriggerAccessViolation+0x10
 000007f7`7638fcb7 : 00000000`000dbba0 00000000`0a6c6040 00000002`f5b6e430 00000000`6d0720da : WriteInternalMiniDump+0x12
 000007f7`76223ed3 : 00000000`00000000 000007f7`75c9f340 00000002`8ded1e30 00000000`00000003 : Logger::LogV+0xe7
 000007f7`7639113d : 00000002`8ded1e30 00008173`dbcffd42 00000000`0a6c6800 00000000`00000000 : LogTraceInfo::operator()+0x33
 000007f7`763ea517 : 000007f7`75ce1f50 000007f7`75ce1f20 00000000`00000067 00000000`00000027 : Logger::LogV+0x7d
 000007f7`76223efd : 00000000`0a6c6820 00000000`0a6c6560 00000000`0a6c64d0 00000002`aaad4d00 : StringBuilder::AppendVarArg+0x277
0:034> .load cmkd
 0:034> !stack -p -t
 Call Stack : 24 frames
 ## Stack-Pointer Return-Address Call-Site
 00 000000000a6c5ea0 000007f776393d62 MyApp!sdk::Logger::TriggerAccessViolation+10
 Parameter[0] = (unknown) :
 Parameter[1] = (unknown) :
 Parameter[2] = (unknown) :
 Parameter[3] = (unknown) :
 01 000000000a6c5ec0 000007f77638fcb7 MyApp!sdk::Logger::WriteInternalMiniDump+12
 Parameter[0] = 00000000000dbba0 : rcx setup in parent frame by movb instruction @ 000007f77638fcad from immediate data
 Parameter[1] = (unknown) :
 Parameter[2] = (unknown) :
 Parameter[3] = (unknown) :
 02 000000000a6c5f00 000007f776223ed3 MyApp!sdk::Logger::LogV+e7
 Parameter[0] = 00000002353c0080 : rcx saved in current frame into NvReg r13 which is saved by child frames
 Parameter[1] = 00000002353d8dd8 : rdx saved in current frame into NvReg r14 which is saved by child frames
 Parameter[2] = 00000002353d8e20 : r8 saved in current frame into NvReg r15 which is saved by child frames
 Parameter[3] = 00000002353d8e20 : r9 saved in current frame into NvReg rbx which is saved by child frames
 03 000000000a6c61f0 000007f77639113d MyApp!sdk::LogTraceInfo::operator()+33
 Parameter[0] = 000000000a6c6290 : rcx setup in parent frame by lea instruction @ 000007f776391134 from mem @ 000000000a6c6290
 Parameter[1] = 0000000000000027 : rdx setup in parent frame by mov instruction @ 000007f776391132 from NvReg rsi which is saved by child frame
 Parameter[2] = 0000000000000005 : r8 setup in parent frame by mov instruction @ 000007f77639112b from mem @ 000000000a6c6350
 Parameter[3] = 000007f775ce1fb0 : r9 setup in parent frame by mov instruction @ 000007f776391124 from mem @ 000000000a6c6358
 04 000000000a6c6240 000007f7763ea517 MyApp!sdk::Logger::LogV+7d
 Parameter[0] = 000007f775ce1f50 : rcx setup in parent frame by lea instruction @ 000007f7763ea4ea
 Parameter[1] = 000007f775ce1f20 : rdx setup in parent frame by lea instruction @ 000007f7763ea4e3
 Parameter[2] = 0000000000000067 : r8 saved in current frame into stack
 Parameter[3] = 0000000000000027 : r9 setup in parent frame by movb instruction @ 000007f7763ea4fb from immediate data
 05 000000000a6c6330 000007f776223efd MyApp!StringBuilder::AppendVarArg+277 (perf)
 Parameter[0] = 000007f775ce1fb0 : rcx saved in current frame into NvReg rdi which is saved by child frames
 Parameter[1] = 000007f777bff60f : rdx saved in current frame into NvReg rbp which is saved by child frames
 Parameter[2] = 000000000a6c63c0 : r8 setup in parent frame by lea instruction @ 000007f776223ef3 from mem @ 000000000a6c63c0
 Parameter[3] = (unknown) :

To further explore the newly retrieved arguments, you can use the WinDbg dt command (dt MyType_Ptr* 0x000000000a6cf338), or continue in Visual Studio and use the Watch window to interpret the memory, e.g. by casting an address like this: (MyType_Ptr*)0x000000000a6cf338. The memory address in the example is from the truncated part of the log and is only here to demonstrate the syntax of the commands one would use.

  1. More details on the x64 calling convention and techniques of the assembly-level analysis are available elsewhere.

Three men in a boat

Probably the funniest book ever written. It is in the public domain.

How good one feels when one is full — how satisfied with ourselves and with the world! People who have tried it, tell me that a clear conscience makes you very happy and contented; but a full stomach does the business quite as well, and is cheaper, and more easily obtained. One feels so forgiving and generous after a substantial and well-digested meal — so noble-minded, so kindly-hearted. It is very strange, this domination of our intellect by our digestive organs. We cannot work, we cannot think, unless our stomach wills so. It dictates to us our emotions, our passions. After eggs and bacon, it says, “Work!” After beefsteak and porter, it says, “Sleep!” After a cup of tea (two spoonsful for each cup, and don’t let it stand more than three minutes), it says to the brain, “Now, rise, and show your strength. Be eloquent, and deep, and tender; see, with a clear eye, into Nature and into life; spread your white wings of quivering thought, and soar, a god-like spirit, over the whirling world beneath you, up through long lanes of flaming stars to the gates of eternity!” After hot muffins, it says, “Be dull and soulless, like a beast of the field — a brainless animal, with listless eye, unlit by any ray of fancy, or of hope, or fear, or love, or life.” And after brandy, taken in sufficient quantity, it says, “Now, come, fool, grin and tumble, that your fellow-men may laugh — drivel in folly, and splutter in senseless sounds, and show what a helpless ninny is poor man whose wit and will are drowned, like kittens, side by side, in half an inch of alcohol.” We are but the veriest, sorriest slaves of our stomach. Reach not after morality and righteousness, my friends; watch vigilantly your stomach, and diet it with care and judgment. Then virtue and contentment will come and reign within your heart, unsought by any effort of your own; and you will be a good citizen, a loving husband, and a tender father — a noble, pious man.

“Three Men in a Boat” by Jerome K. Jerome

Mastery is a mindset

Mastery is a mindset. According to Dweck, people can hold two different views of their own intelligence. Those who have an “entity theory” believe that intelligence is just that — an entity. It exists within us, in a finite supply that we cannot increase. Those who subscribe to an “incremental theory” take a different view. They believe that while intelligence may vary slightly from person to person, it is ultimately something that, with effort, we can increase. To analogize to physical qualities, incremental theorists consider intelligence as something like strength. (Want to get stronger and more muscular? Start pumping iron.) Entity theorists view it as something more like height. (Want to get taller? You’re out of luck.)

If you believe intelligence is a fixed quantity, then every educational and professional encounter becomes a measure of how much you have. If you believe intelligence is something you can increase, then the same encounters become opportunities for growth. In one view, intelligence is something you demonstrate; in the other, it’s something you develop. The two self-theories lead down two very different paths — one that heads toward mastery and one that doesn’t.

For instance, consider goals. Dweck says they come in two varieties — performance goals and learning goals. Getting an A in French class is a performance goal. Being able to speak French is a learning goal. “Both goals are entirely normal and pretty much universal,” Dweck says, “and both can fuel achievement.” But only one leads to mastery. In several studies, Dweck found that giving children a performance goal (say, getting a high mark on a test) was effective for relatively straightforward problems but often inhibited children’s ability to apply the concepts to new situations. For example, in one study, Dweck and a colleague asked junior high students to learn a set of scientific principles, giving half of the students a performance goal and half a learning goal. After both groups demonstrated they had grasped the material, researchers asked the students to apply their knowledge to a new set of problems, related but not identical to what they’d just studied. Students with learning goals scored significantly higher on these novel challenges. They also worked longer and tried more solutions. As Dweck writes, “With a learning goal, students don’t have to feel that they’re already good at something in order to hang in and keep trying. After all, their goal is to learn, not to prove they’re smart.”

“Drive: The Surprising Truth About What Motivates Us” by Daniel H. Pink

Why meditation didn’t work for me

I tried practicing regular meditation after reading “Search Inside Yourself”, but couldn’t get into the habit. Upon further reflection I realized that I already use practices similar to mindfulness meditation as part of my everyday work process, which may be why I don’t feel the value of setting aside dedicated meditation time.

Mindfulness meditation is all about strong attention focus, deep awareness of reality, and crystal-clear comprehension, but all of these are also part of any involved reasoning process. When I’m trying to solve a system design problem, I start by refreshing my knowledge of the relevant details and arranging them in my mental environment in some convenient order; when everything is set, I just, well, stare at them in my mind until something comes up — I see a pattern in the chaos, or the complexity untangles in some other way. Somewhere along that process there is a point of pure mindfulness: after all the analysis and thinking, there is a moment of silent observation and the joy of understanding some more fundamental truth.