I made a horrific mistake (worksheet formula cut and paste with wrong cell reference). The correct ratio is 31%, not 47%. Thanks to Bob Swart for pointing it out.
I should have taken a hint from my own comments in the first article... sheesh...
Still, the fact is that the EUR price for the Nordics is 31% above the USD price. That is way out of whack.
You can see the calculation here.
The true cost difference for the Embarcadero webshop prices in Norwegian Kroner is shown in the rightmost blue columns.
More egg on my face at CodeGear forums: https://forums.codegear.com/thread.jspa?threadID=1589
Delphi Programming - Real programmers write comments mostly in or about other people's code.
Delphi Programming
and software in general.
Tuesday, August 26, 2008
Delphi 2009 Pricing - Euro Markup = 31%
-- start edit --
I made a horrific mistake (worksheet formula cut and paste with wrong cell reference). The correct ratio is 31%, not 47%.
I should have taken a hint from my own comment on the 1.31 ratio... sheesh...
Worksheet updated.
Obviously the 1 USD = 1 EUR assumption was completely false.
-- end edit --
Embarcadero / CodeGear are still treating 1 USD = 1 EUR. In Norway, the exchange rates give a markup ratio of 1.47. Personally, I think that is rather stiff. You can see the calculation here.
The middle example with the 1.31 ratio is due to Norwegian EUR/USD currency exchange margins (DnB NOR Bank). The true cost difference for the Embarcadero webshop prices in Norwegian Kroner is shown in the rightmost blue columns.
Tentative prices from one local reseller indicate a markup ratio of 1.33, comparing the reseller's price to the US webshop price. Why does this feel like being ripped off? When buying software online, I am used to getting the same price as the rest of the world.
Posted on CodeGear forums: https://forums.codegear.com/thread.jspa?threadID=1589
Tuesday, August 19, 2008
FDCLib - Wizards without the black magic - part 1
It is time to get started with another contribution to the FDCLib.
Frames are great, but they can require a bit of fiddling to work well. The plan is to create a simple system to organize frames for tabbed views and wizards and simplify some of the design problems related to visual inheritance as well as class inheritance.
Design goals
1a. Each frame must not need to descend from the same class
1b. It must still be possible to extend frames by inheritance
2a. Each frame should not need to contain extensive knowledge about other frames
2b. Each frame must be able to obtain knowledge about/from other frames
3. Each frame should have (optional) input validation
4. It must be possible to make reusable frames such as file pickers, etc.
5. It must be possible to use the frames in different contexts (modal or modeless, window or dialog)
Using a similar approach to what I did for the grid controller, I will wrap myself around a basic TFrame, grab hold of the hooks that already exist, and move the business logic into a non-visual class known as TFrameWrapper.
To organize these non-visual classes, I will create a TFrameController. The frame controller will hold the knowledge about how the frames are to be presented, organized, and navigated.
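To make the plan a little more concrete, here is a rough Delphi sketch of how the wrapper/controller pair could hang together. Only the names TFrameWrapper and TFrameController come from the plan above; every method, property, and unit name below is my own assumption, and the real FDCLib will almost certainly look different.
unit FrameControllerSketch;
// Hypothetical sketch only - everything except the class names
// TFrameWrapper and TFrameController is invented for illustration.
interface
uses
  Controls, Forms, Contnrs;
type
  // Wraps any TFrame descendant, so frames need no common ancestor (goals 1a/1b).
  TFrameWrapper = class
  private
    FFrame: TFrame;
  public
    constructor Create(AFrame: TFrame);
    // Optional input validation (goal 3); descendants override to veto navigation.
    function Validate(out ErrorText: string): Boolean; virtual;
    property Frame: TFrame read FFrame;
  end;
  // Knows how the wrapped frames are presented, organized and navigated.
  TFrameController = class
  private
    FWrappers: TObjectList;  // owns the wrappers
    FActiveIndex: Integer;
    FHost: TWinControl;      // page control, panel, modal dialog... (goal 5)
  public
    constructor Create(AHost: TWinControl);
    destructor Destroy; override;
    function Add(AWrapper: TFrameWrapper): Integer;
    procedure Activate(Index: Integer);
    procedure Next;          // wizard-style navigation
  end;
implementation
constructor TFrameWrapper.Create(AFrame: TFrame);
begin
  inherited Create;
  FFrame := AFrame;
end;
function TFrameWrapper.Validate(out ErrorText: string): Boolean;
begin
  ErrorText := '';
  Result := True;
end;
constructor TFrameController.Create(AHost: TWinControl);
begin
  inherited Create;
  FHost := AHost;
  FWrappers := TObjectList.Create(True);
  FActiveIndex := -1;
end;
destructor TFrameController.Destroy;
begin
  FWrappers.Free;
  inherited;
end;
function TFrameController.Add(AWrapper: TFrameWrapper): Integer;
begin
  Result := FWrappers.Add(AWrapper);
  AWrapper.Frame.Parent := FHost;
  AWrapper.Frame.Align := alClient;
  AWrapper.Frame.Visible := False;
end;
procedure TFrameController.Activate(Index: Integer);
begin
  if FActiveIndex >= 0 then
    TFrameWrapper(FWrappers[FActiveIndex]).Frame.Visible := False;
  FActiveIndex := Index;
  TFrameWrapper(FWrappers[Index]).Frame.Visible := True;
end;
procedure TFrameController.Next;
var
  Error: string;
begin
  if (FActiveIndex >= 0)
  and not TFrameWrapper(FWrappers[FActiveIndex]).Validate(Error) then
    Exit; // the current frame vetoed the move
  if FActiveIndex < FWrappers.Count - 1 then
    Activate(FActiveIndex + 1);
end;
end.
The point of the shape is that TFrameWrapper adds behaviour around any TFrame without forcing a common ancestor, while TFrameController is the only piece that knows about ordering and navigation.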
Teaser: The frame controller will be reused in a future part of FDCLib. What part of the application do you usually need first, but often end up writing last? No, not the documentation :P - we are talking about code after all :)
Before I move on - I'd like to gather some intel about how you are using frames today.
Your comments will be greatly appreciated!
Friday, August 15, 2008
Google Custom Search and Delphi
There is some commotion in the new discussion groups at Embarcadero. Many people miss the old groups and complain about their disappearance. Some time back I was toying with Google Custom Search, and I thought I'd see if that could be used for Delphi searching in the new groups as well as the old - and also including other Delphi relevant sites.
I have attempted to add some Delphi-relevant sites to the index, and have currently configured the search to (Edited) also allow hits outside the specified sites. It would be useful to explicitly add more sites, but this is just an experiment.
Give it a try and see if you get what you expect...
You can also contribute to the configuration.
P.S. AdSense is NOT enabled, and I will NOT enable it. Ideally, Embarcadero should create and maintain the custom search, allowing its community to contribute.
Edit: Please note that this was a very quick and dirty configuration. Given more time and experience with the CSE, it should be possible to get better results.
Tuesday, August 12, 2008
repeat Mistake until LessonLearned;
If we indeed do learn from our mistakes, I should have been a genius by now. Instead I feel silly for forgetting to do the most obvious checks and the most trivial safety measures. And I don't only do it once - I do it all the time!
Coding starts with an idea, some thinking about implications, hopefully some deliberation on requirements, possibly the choice of a pattern for implementation, and maybe even some kind of plan for testing that it all works as intended. Even if we put all of the above into action, we will most likely repeat some common coding mistakes.
"[A]nd then it occurred to me that a computer is a stupid machine with the ability to do incredibly smart things, while computer programmers are smart people with the ability to do incredibly stupid things. They are, in short, a perfect match."
~Bill Bryson
Here are some of my more typical causes of involuntary visits to the debugger.
I know that many of my mistakes stem from the brain being ahead of the fingers in the typing process. I have the idea all figured out in my head, and now I want to see it in action. In my desire to "save time" - I convince myself that I can add the checks later, but for now I can get away with putting up just the basic scaffolding and leave the safety rails for later. Naturally, when later comes - I am already on a different problem, taking the same shortcuts.
It boils down to bad habits.
Bart Roozendaal wrote about pre- and post-conditions yesterday, and although these may save my bacon to a certain degree, it still boils down to the same issue: my failure to take the time to actually implement the safety measures immediately, instead postponing them until later.
So... what should I do to catch my most common mistakes? Pick up some good habits and stick with them. Time spent early will be saved ten-fold later.
Here are some suggestions (a small Delphi sketch tying a few of them together follows the list):
• Forgetting to check for assignment (NIL pointer) and Invalid typecasts
Assert. Assert. Assert. Make sure that you qualify every reference before you use it.
In the context of Bart's post, maybe pre/post requirements would be even more useful if they generated tips (lightweight hints?) at the location where the method that has the requirements is called.
• Dangling pointer (Non-NIL, but invalid)
FreeAndNil(ObjectRef); Just do it. Avoid passing around and keeping non-volatile copies of the reference. If you really need the reference, implement some sort of common storage that encapsulates the object(s) you need to reference, i.e. create a secure reference to a reference. Keep in mind that such common storage should exist before and after the lifetime of whichever object it references.
• Index/access out of string/array bound
Take a minute and check the boundary cases - what happens at the minimum and maximum index? Will index +/- length exceed the boundary?
• Numeric value or enumeration value out of range, Boolean evaluation mistake, numeric evaluation mistake (implicit cast accuracy problem)
Be explicit. Adding more brackets and explicit casts will not slow down your code. Break down the expression logic into smaller parts for clarity if you need to.
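Here is the promised small, hypothetical Delphi console sketch tying a few of these checks together - TCustomer and the list are invented for the example; the point is the Assert/Assigned check, the explicit bounds check, the safe cast, and FreeAndNil.
program DefensiveChecks;
{$APPTYPE CONSOLE}
// Hypothetical sketch only: TCustomer and the customer list are made up
// to illustrate the checks discussed above.
uses
  SysUtils, Contnrs;
type
  TCustomer = class
    Name: string;
    constructor Create(const AName: string);
  end;
constructor TCustomer.Create(const AName: string);
begin
  inherited Create;
  Name := AName;
end;
function GetCustomer(List: TObjectList; Index: Integer): TCustomer;
begin
  Assert(Assigned(List), 'GetCustomer: List must be assigned');   // qualify the reference
  if (Index < 0) or (Index >= List.Count) then                    // check the boundary cases
    raise ERangeError.CreateFmt('Index %d is outside 0..%d',
                                [Index, List.Count - 1]);
  Result := List[Index] as TCustomer;  // "as" raises EInvalidCast instead of returning garbage
end;
var
  Customers: TObjectList;
  Customer: TCustomer;
begin
  Customers := TObjectList.Create(True);  // owns its objects
  try
    Customers.Add(TCustomer.Create('Ada'));
    Customer := GetCustomer(Customers, 0);
    Writeln(Customer.Name);
    try
      GetCustomer(Customers, 5);          // deliberately out of bounds
    except
      on E: ERangeError do
        Writeln('Caught: ', E.Message);
    end;
  finally
    FreeAndNil(Customers);                // no dangling reference left behind
  end;
end.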
One book that is a veritable goldmine of healthy coding advice is Code Complete (2nd ed.) by Steve McConnell. The book's advice is generally language agnostic, and you will most likely recognize much of the advice given as common sense - but the real value is to use the book to remind yourself of using that common sense. If I had read this book as a student, I would have saved myself a lot of trouble.
If you don't wanna shop for books - do as Lars D and explain the code to your favorite Teddybear.
"What I mean is that if you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your own mind. And the more slow and dim-witted your pupil, the more you have to break things down into more and more simple ideas. And that's really the essence of programming. By the time you've sorted out a complicated idea into little steps that even a stupid machine can deal with, you've certainly learned something about it yourself."
~Douglas Adams
Monday, August 11, 2008
Optimization - Understand your hardware
Paul (of snippet formatting fame) mentioned that he was comfortable with his coding style, but that he would like to learn more about optimizing his code.
The generally recognized and recommended approach to optimization is that you never optimize code until you really must. The reasoning behind this is that you need actual running code before you can pinpoint the real bottlenecks, and there is no point in spending time optimizing code that runs once in a blue moon, since you will hardly gain any significant performance from it.
The Delphi compiler is very good at optimizing code. Most of the time, you would be hard pressed to write better assembly code than what the compiler produces.
However, we are the ones who set the premises for how it will organize and access data. If we tell the compiler to work through the rows or the columns, the compiler generally will have to do what we say. We decide data size, data organization, and data access. The best kind of optimization we can do is to think hard about how we organize and access our data.
Once upon a time, before computers became a household appliance, around the time when the computing jungle still reeked of digital dino-dung, I decided to pursue a career in designing microprocessors. This was before I realized that it was - relatively speaking - cheaper to make mistakes in software than in hardware.
I learned some useful lessons on accessing data the hardware way, though. Accessing memory is costly! Very costly! You should avoid it if you can! :)
The ridiculously simple example is that if you store your data in rows, it will potentially be very costly to access it column by column. If you need to process large amounts of data in memory (array / matrix / SSD type operations), the size and alignment of your data matter more than you think.
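To make the row/column point measurable, here is a small, hypothetical console sketch (it is not the alignment demo project mentioned next, just an illustration): both loops touch the same 32 MB of Doubles, but the column-wise sweep fights the row-major layout and will typically take several times longer.
program RowVsColumn;
{$APPTYPE CONSOLE}
// Hypothetical demo: sums the same matrix row-by-row and column-by-column
// to show the cost of fighting the memory layout. Delphi static arrays
// are stored row-major.
uses
  Windows;
const
  N = 2000;
type
  TMatrix = array[0..N - 1, 0..N - 1] of Double;  // about 32 MB
  PMatrix = ^TMatrix;
var
  M: PMatrix;
  Row, Col: Integer;
  Sum: Double;
  T0: Cardinal;
begin
  New(M);
  try
    for Row := 0 to N - 1 do
      for Col := 0 to N - 1 do
        M^[Row, Col] := Row + Col;

    Sum := 0;
    T0 := GetTickCount;
    for Row := 0 to N - 1 do          // inner loop walks memory sequentially
      for Col := 0 to N - 1 do
        Sum := Sum + M^[Row, Col];
    Writeln('Row-wise sweep:    ', GetTickCount - T0, ' ms  (sum=', Sum:0:0, ')');

    Sum := 0;
    T0 := GetTickCount;
    for Col := 0 to N - 1 do          // inner loop strides N*SizeOf(Double) bytes per step
      for Row := 0 to N - 1 do
        Sum := Sum + M^[Row, Col];
    Writeln('Column-wise sweep: ', GetTickCount - T0, ' ms  (sum=', Sum:0:0, ')');
  finally
    Dispose(M);
  end;
end.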
Here is a small project that you can play around with to see the effects of data alignment. The demo allocates 10,000 items of 512 bytes each as one block, and adds an offset (0..25) from the original block start to each item address. Each item gets a key value from Random (a Double); the Random seed is reset for each offset iteration. Each fill and sort is repeated 5 times to try to avoid random interference from the OS, etc. The "read and write" of the data for sorting doesn't actually move all the data, just the key (Double) plus a simulated move of the block size into a local buffer. A full-blown sort with data move would add a few more moves, which would add more boundary overhead.
This is average fill+sort time (raw TDateTime value). Notice the difference on 8 byte boundaries.
The well-known C++ guru Herb Sutter held a presentation, "Machine Architecture: Things Your Programming Language Never Told You", for the North West C++ Users Group. He can convey this knowledge far more entertainingly and far more effectively than I can. BTW, he also has several interesting articles on concurrency.
To quote the notes on the video: High-level languages insulate the programmer from the machine. That’s a wonderful thing -- except when it obscures the answers to the fundamental questions of “What does the program do?” and “How much does it cost?” The C++/C#/Java programmer is less insulated than most, and still we find that programmers are consistently surprised at what simple code actually does and how expensive it can be -- not because of any complexity of a language, but because of being unaware of the complexity of the machine on which the program actually runs. This talk examines the “real meanings” and “true costs” of the code we write and run especially on commodity and server systems, by delving into the performance effects of bandwidth vs. latency limitations, the ever-deepening memory hierarchy, the changing costs arising from the hardware concurrency explosion, memory model effects all the way from the compiler to the CPU to the chipset to the cache, and more -- and what you can do about them.
Herb Sutter is an excellent speaker, so find a quiet spot, pull out the popcorn, and set aside 1 hour and 56 minutes for a very entertaining and very enlightening session on the true cost of accessing memory.
Video link: http://video.google.co.uk/videoplay?docid=-4714369049736584770
Thursday, August 7, 2008
Writing Readable Code - Paul's Snippet Rewritten
Five brave people have contributed rewritten versions of Paul's code. Let's take a look at their approach.
I reformatted Paul's example using two spaces for every indent level (after then, begin, else, etc.), hoping that this is the way he originally wrote it.
IMO, Paul's formatting (or my assumptions about his formatting) does not reflect the true code path, and trying to decipher the code paths becomes unnecessarily hard.
Here is a small exercise. If we say that the string "ABCDEFG" means all conditions are TRUE, and the string "abcdefg" means all conditions are false:
try
if ConditionA then
Code(1)
else if ConditionB then
if ConditionC then begin
if ConditionD then
code(2);
code(3);
end
else if ConditionE then begin
code(4);
morecode('X');
if ConditionF then
code(5)
else if ConditionG then
code(6);
end else begin
code(7);
morecode('Y');
end;
finally
code(8);
end;
• What codes are run by "ABCDEFG"?
• What codes are run by "aBcdefg"?
• What string(s) would make code(6) run?
TS contributed a very nicely formatted version, which is effective in guiding us through the potential code paths. I like his style.
try
if ConditionA then
Code(1)
else if ConditionB then
if ConditionC then
begin
if ConditionD then
code(2);
code(3);
end // without the semi-colon
else if ConditionE then
begin
code(4);
morecode('X');
if ConditionF then
code(5)
else if ConditionG then
code(6);
end
else
begin
code(7);
morecode('Y');
end;
finally
code(8);
end;
SS didn't quite manage to reflect the flow in the code and his formatting is similar to Paul's code.
try
if ConditionA then
Code(1)
else if ConditionB then
if ConditionC then
begin
if ConditionD then
code(2);
code(3);
end //; I assume this semi-colon has to go
else if ConditionE then
begin
code(4);
morecode('X');
if ConditionF then
code(5)
else if ConditionG then
code(6);
end
else
begin
code(7);
morecode('Y');
end;
finally
code(8);
end;
Jolyon scores high on restructuring and simplifying, but he made one mistake in his change. Under which condition will his code behave differently from the original?
procedure WhenC;
begin
if ConditionD then
code(2);
code(3);
end;
procedure WhenE;
begin
code(4);
morecode('X');
if ConditionF then
code(5)
else if ConditionG then
code(6);
end;
begin
try
if ConditionA then
Code(1)
else if NOT ConditionB then
EXIT;
if ConditionC then
WhenC
else if ConditionE then
WhenE
else
begin
code(7);
morecode('Y');
end;
finally
code(8);
end;
AO goes even further than Jolyon in reformatting and comments: "As I believe that proper formatting is not the only way to increase readability of code, I also refactored it a bit to reduce nesting where possible. If this were actual code, I would have probably gone even further and introduced separate routines for the individual blocks, depending on their complexity."
try
{ This comment explains why ConditionA is handled first. }
if ConditionA then begin
Code(1);
Exit;
end;
{ This comment explains why the aggregate of ConditionB and ConditionC is handled next. }
if ConditionB and ConditionC then begin
{ This comment explains why ConditionD is relevant only in this block. }
if ConditionD then
code(2);
code(3);
Exit;
end;
{ This comment explains why ConditionE is handled next. }
if ConditionE then begin
code(4);
morecode('X');
{ This comment explains why ConditionF and ConditionG are relevant only in this block. }
if ConditionF then
code(5)
else if ConditionG then
code(6);
Exit;
end;
{ This comment explains the default handling. }
code(7);
morecode('Y');
finally
{ This comment explains why the following code must execute no matter what. }
code(8);
end;
This is readable, but personally I think AO went a bit too far. Like Jolyon, he also missed a structural detail in the refactoring and broke the code. Which condition state(s) will cause the code to misbehave?
I am divided on the use of Exit. It can add clarity, but it can also be a big problem as you leave a lot of code "dangling". If you decide to move the exit - you have to be really careful to ensure that any states that suddenly move in or out of scope behave correctly. If there is a nesting church and a chunking church, I'm probably a "Nestorian".
I do agree that refactoring is a valuable tool to clarify and simplify, and we should make an effort to break down our code into manageable blocks, but in this particular case it probably isn't necessary.
Another thing: I avoid using {curly braces} for in-code commentary and use // instead. Why? Because if you need to comment out code containing select count(*) from table, those curlies will work where the (* comment *) fails. Should CodeGear add support for comment nesting? I don't know...
Anyways...
Here's how I would format the example. This is very similar to TS's example, except that I shovel all the reserved words to the left side to leave the logic more visible and commentable, and I also add indentation to the innermost conditional code.
try
if ConditionA
then Code(1)
else if ConditionB
then if ConditionC
then begin
if ConditionD
then code(2);
code(3);
end
else if ConditionE
then begin
code(4);
morecode('X');
if ConditionF
then code(5)
else if ConditionG
then code(6);
end
else begin
code(7);
morecode('Y');
end;
finally
code(8);
end;
MJ adds a contribution with the following comment: "To be sure not to create any future bugs you should consider adding a begin end section after every if statement even if it is not required, but that will make the code more unreadable."
try
if ConditionA then
Code(1)
else if ConditionB then
begin
if ConditionC then
begin
if ConditionD then
code(2);
code(3);
end
else if ConditionE then
begin
code(4);
morecode('X');
if ConditionF then
code(5)
else if ConditionG then
code(6);
end
else
begin
code(7);
morecode('Y');
end;
end;
finally
code(8);
end;
Good points! In all honesty, I also screwed up the code blocks on the first try. Conditional code will bite you if you are not very, very careful. You should indeed think about what may happen if you need to add more code and/or conditions. Personally, I don't think a few more enclosures make the code less readable. Here is how I would be more explicit in the use of enclosures to make the code less ambiguous.
try
if ConditionA
then Code(1)
else begin
if ConditionB
then begin
if ConditionC
then begin
if ConditionD
then code(2);
code(3);
end
else begin
if ConditionE
then begin
code(4);
morecode('X');
if ConditionF
then code(5)
else if ConditionG
then code(6);
end
else begin
code(7);
morecode('Y');
end;
end;
end;
end;
finally
code(8);
end;
There is no one true correct way of formatting.
The point I am trying to make is that code structure matters for understanding the code at first glance, and we should be mindful about how we lay it out. We should strive for consistency, but always keep clarity and unambiguity as priority one. Bend your rules if you need to.
Paul, Thank you for creating such a devious code snippet!
P.S. If you haven't figured out the answers yet, load up your Delphi and run all the examples in StructureDemo.dpr.
No peeking until you have some suggestions! :)
(Hint: else starts a new block)
Edit: Added an example of using Exits instead of nesting to the initial post on formatting. It would be interesting to see DelphiFreak have a go at Paul's snippet.
Writing readable code - Comment for Context
While waiting for more suggestions for Paul's snippet, and while weaving and dodging through the virtual fireballs from the previous posts on code formatting and comments, I fearlessly continue my Don Quixote crusade for readable code. I am not quite done with comments yet.
It is said that "Real programmers don't write comments - It was hard to write - it should be hard to read". That doctrine is way overdue for deletion.
Comments are the context frame of our code.
Comments are like garlic.
Too little and the result is bland and unpalatable; too much and it stinks things up. Comments should only on rare occasions assume that the reader is too dumb to read the code properly, so as a general rule we should not rewrite our code in plain English, line for line. That approach tends to fog up the source, and it becomes a drag to maintain (yeah, I know... garlic metaphors stink...).
Comments are like news headlines.
// File ready for saving!
// No more filehandles, says OS
// Raises exception for World+dog
They should be short, concise and to the point. In good tabloid tradition, they should also only touch on the very general points, and oversimplify what really is going on.
Comments are like foreplay.
They serve to get us readers in the mood to appreciate the sleek lines, the richness of its properties, and possibly arouse our interest to the point where we are ready to ravage the code.
Comments are like, bi-se... uh directional.
Err, well... what I am trying to say is that sometimes we can write them before we do stuff, while at other times it can be just as effective to write them after something has been done in the code. The former would probably indicate how we are going to do something, while the latter would focus on the result of what we just did.
Pick up the garbage!
Do you leave your workshop tools lying about in case you need them quickly, or do you store them safely away? Once you finalize your code, clean it up. Put "I deleted this, moved that, added that" comments in your version control commit/check-in comments, and don't litter the source code with it. Once the code has been created or removed, the reason is uninteresting. What the code is supposed to do, is so much more valuable information. Remove unnecessary comments.
Don't spread FUD.
Your commentary should highlight the correctness and function of your code, not underline how insecure, unexpectedly behaving, mysterious, or potentially unreliable it is. If there is remaining work to be done, it should be written as a To-Do point, containing sufficient information to guide you to a starting point for that work.
It takes practice to write good commentary (life-long practice, some would say), but unless your code is totally clear and unambiguous to the point of self-explaining to your grandmother, you should probably add some comments.
To round it off, here are a couple of disturbing - but funny - comments I have come across in production code:
Should I or should I not uncomment the first comment? Well, it has been inactive for 7 years, so I think I'll leave it as is, or remove it.
// ---- Better let sleeping dogs lie? Note the date. ----
try
if ValidParentForm(self) <> nil then
begin
ValidParentForm(Self).ActiveControl:=NIL;
//Turn this back on when I return from my holiday and have time to test it
//N.N. 20.07.2001
{if ValidParentForm(Self).ActiveControl=lFocusedControl then
//Control doesn't let go of focus, most likely because of some error message
exit; //Aborting the routine}
end;
if DataSet is TExtendedDataSet then
TExtendedDataSet(DataSet).Save
else
DataSet.Post;
finally
. . .
// ---- Unusual if/then/else construct ----
if (EditSomeProp.Text <> '')
and ((RequestTable[i].NUMREF+1) > StrToInt(EditSomeProp.Text)) then
// Watch out for Elsie here
else
begin
. . .
The second one is history, due to a rewrite.
Writing Readable Code - Paul's Snippet - Show your formatting!
Paul made a comment on my previous post, but unfortunately the Blogger comment system ate all the whitespace. Paul, if you read this - please reformat the code below and email it to me as a zipped attachment! I would also love to receive everybody else's reformatted versions!
Your anonymity is guaranteed (Unless you explicitly permit me to name you and/or link to a site or something).
Please note that there was a syntax problem in the original snippet, and I have indicated it below. I suggest that unless Paul instructs us otherwise, we take out the semi-colon after the end on that line.
try
if ConditionA then
Code(1)
else if ConditionB then
if ConditionC then begin
if ConditionD then
code(2);
code(3);
end; // I assume this semi-colon has to go
else if ConditionE then begin
code(4);
morecode('X');
if ConditionF then
code(5)
else if ConditionG then
code(6);
end else begin
code(7);
morecode('Y');
end;
finally
code(8);
end;
Wednesday, August 6, 2008
Writing Readable Code - Formatting and Comments
I guess there are at least as many coding styles as there are programmers times programming languages. Many of us are conformists, and some of us have a highly personal style, but most of us probably fail at being 100% consistent. Code formatting can probably be considered a slightly religious area, so please feel free to disagree with me :)
So - why bother putting any effort into formatting anyway? It is not like the compiler cares. I mean, apart from the obvious requirement that things should be reasonably readable and not just a jumble of code?
Layout matters. It is significantly easier to pick up code that is consistent in style than to wade through spaghetti (or gnocchi) code where the indentation has not been given much thought.
I have noticed that my personal style with regard to block formatting is definitely non-conformist. Whenever there is a do or an if/then/else, with or without a related begin/end, I will probably annoy the heck out of the conformists.
My formatting goals:
• I want clear indication of flow
• I want to separate conditions from actions
• I want room to comment
In the examples below, I place comments in the individualist examples to give an indication of why I choose to wrap/indent this way rather than follow the "global standard".
Let's start with the do in with and the for-loop (I do at times, albeit rarely, confess to the occasional use of with).
// Conformist
for i := 1 to 5 do Something;
for i := 1 to 5 do begin
Something;
SomethingMore;
end;
with SomeObject do begin
Something;
SomethingMore;
end;
// Individualist
for i := 1 to 5 // For selected range
do Something(i); // get stuff done
for i := 0 to (count - 1) // For every item
do begin
Something(i); // Step 1
SomethingMore(i); // Step 2
end;
with SomeObject // Focusing on this specific item,
do begin // I can add this comment for clarity
Something;
SomethingMore;
end;
The if/then/else statement has too many combinations to even get close to covering them all, so I am only going to do a handful.
// Conformist
if (SomeVariable = Condition) and (SomeOtherExpression) then
PerformSomeRoutine;
if (SomeVariable = Condition) and (SomeOtherExpression) then begin
Code;
MoreCode;
LotsOfCode;
end else OtherCode;
if Condition then DoFirstAlternative else DoSecondAlternative;
if FirstCondition then DoFirstAlternative
else if SecondCondition then DoSecondAlternative
else DoThirdAlternative;
// Individualist
if (SomeVariable = Condition) // I always put the condition alone
then PerformSomeRoutine;
if (SomeVariable = Condition) // Condition X found
and (SomeOtherExpression) // Requirement Y fulfulled
then begin // we can do our stuff
Code;
MoreCode;
LotsOfCode;
end
else OtherCode; // or we ended up with the alternative
if Condition
then DoFirstAlternative // The Condition compelled us to do this
else DoSecondAlternative; // Optionally, we explain the alternative
// Here I find myself doing many different approaches,
// depending on the complexity of the logic, and the number
// of alternatives, but with multiple nested if's, I tend to indent
if FirstCondition
then DoFirstAlternative // We are doing 1 because ...
else if SecondCondition
then DoSecondAlternative // We are doing 2 because ...
else DoThirdAlternative; // Otherwise, We are doing 3
// I might chose this approach if it is a long chain
if FirstCondition
then DoFirstAlternative // We are doing 1 because ...
else if SecondCondition
then DoSecondAlternative // We are doing 2 because ...
else DoThirdAlternative; // Otherwise, We are doing 3
C++/C# and Java people like to rant about how ugly and verbose our begin/ends are, and I am sure I could rile myself up over their curly brace enclosures and some of the religion connected to those too - but I am not going to go there. However, there are a few things about our enclosures that may be worth thinking over.
Some like to dangle their ends at various indentation levels. I prefer to align the end with the matching start of the block (do begin, then begin). Why? It makes it easier to identify the code path in relation to the condition.
More enclosures don't always mean better readability. Here is an example from Indy 9 (IdTunnelCommon.pas). Kudzu, I love you - but I disagree with your code style in this file :)
(I know I am going to get flamed for this...^^)
This little nugget very much bears the signs of being written for debugging, so I am not going to hold it against Kudzu for style (much). It includes a few nice examples of code that can be simplified, so it is all good.
procedure TReceiver.SetData(const Value: string);
var
CRC16: Word;
begin
Locker.Enter;
try
try
fsData := Value;
fiMsgLen := Length(fsData);
if fiMsgLen > 0 then begin
Move(fsData[1], (pBuffer + fiPrenosLen)^, fiMsgLen);
fiPrenosLen := fiPrenosLen + fiMsgLen;
if (fiPrenosLen >= HeaderLen) then begin
// copy the header
Move(pBuffer^, Header, HeaderLen);
TypeDetected := True;
// do we have enough data for the entire message
if Header.MsgLen <= fiPrenosLen then begin
MsgLen := Header.MsgLen - HeaderLen;
Move((pBuffer+HeaderLen)^, Msg^, MsgLen);
// Calculate the crc code
CRC16 := CRC16Calculator.HashValue(Msg^);
if CRC16 <> Header.CRC16 then begin
fCRCFailed := True;
end
else begin
fCRCFailed := False;
end;
fbNewMessage := True;
end
else begin
fbNewMessage := False;
end;
end
else begin
TypeDetected := False;
end;
end
else begin
fbNewMessage := False;
TypeDetected := False;
end;
except
raise;
end;
finally
Locker.Leave;
end;
end;
So what do I do?
• I assume the except block was for debugging, but I'm taking it out.
• I might be changing the behaviour by setting two default values up top, but at least there is no doubt about their default value.
• What is with the conditional assignments? Please! BooleanVariable := Expression; Don't go "then true else false" on me!
• Reworked the comments.
Personally, I think it is more readable like this.
Now, where is that flameproof tin foil dress...
procedure TReceiver.SetData(const Value: string);
var
CRC16: Word;
begin
fbNewMessage := False;
TypeDetected := False;
Locker.Enter;
try
fsData := Value;
fiMsgLen := Length(fsData);
if fiMsgLen > 0
then begin // Data found, Check for Type
Move(fsData[1], (pBuffer + fiPrenosLen)^, fiMsgLen);
fiPrenosLen := fiPrenosLen + fiMsgLen;
TypeDetected := (fiPrenosLen >= HeaderLen);
if TypeDetected
then begin // We have a type, check header for content
Move(pBuffer^, Header, HeaderLen);
fbNewMessage := Header.MsgLen <= fiPrenosLen;
if fbNewMessage
then begin // we have enough data for the entire message
MsgLen := Header.MsgLen - HeaderLen;
Move((pBuffer+HeaderLen)^, Msg^, MsgLen);
CRC16 := CRC16Calculator.HashValue(Msg^); // Calculate the crc code
fCRCFailed := CRC16 <> Header.CRC16; // and check if it was correct
end;
end;
end;
finally
Locker.Leave;
end;
end;
Next: I'll be rambling a little more on effective comments, naming and structure.
Edit - added after Paul's Snippet Rewritten: DelphiFreak likes to exit early. He also likes to initialize up top, and I totally agree - that is a good practice. Exit works well in this example, where all the conditions are dependent on the previous one. IMO, things get a lot more hairy if there are multiple conditions with alternatives (if/then/else if). For this example, Exits are not bad.
procedure TReceiver.SetData(const Value: string);
var
CRC16: Word;
begin
fbNewMessage := False;
TypeDetected := False;
Locker.Enter;
try
fsData := Value;
fiMsgLen := Length(fsData);
if fiMsgLen = 0
then Exit;
// Data found, Check for Type
Move(fsData[1], (pBuffer + fiPrenosLen)^, fiMsgLen);
fiPrenosLen := fiPrenosLen + fiMsgLen;
TypeDetected := (fiPrenosLen >= HeaderLen);
if not TypeDetected
then Exit;
// We have a type, check header for content
Move(pBuffer^, Header, HeaderLen);
fbNewMessage := Header.MsgLen <= fiPrenosLen;
if not fbNewMessage
then Exit;
// we have enough data for the entire message
MsgLen := Header.MsgLen - HeaderLen;
Move((pBuffer+HeaderLen)^, Msg^, MsgLen);
CRC16 := CRC16Calculator.HashValue(Msg^); // Calculate the crc code
fCRCFailed := CRC16 <> Header.CRC16; // and check if it was correct
finally
Locker.Leave;
end;
end;
Tuesday, August 5, 2008
Why Free Software has poor usability, and how to improve it
A very interesting read found over at Matthew Paul Thomas's blog.
In 15 points, he puts his finger on some potential issues with free software. Since I am in the process of trying to make some, I found many of the points familiar, and sometimes even too close for comfort. His essay contains some important reminders and well-developed ideas for improving the process of designing and developing software intended for public consumption.
Here is the list of points:
• Weak incentives for usability
• Few good designers
• Design suggestions often aren’t invited or welcomed
• Usability is hard to measure
• Coding before design
• Too many cooks
• Chasing tail-lights
• Scratching their own itch
• Leaving little things broken
• Placating people with options
• Fifteen pixels of fame
• Design is high-bandwidth, the Net is low-bandwidth
• Release early, release often, get stuck
• Mediocrity through modularity
• Gated development communities
Permalink to Matthew Paul Thomas's article: http://mpt.net.nz/archive/2008/08/01/free-software-usability
Spotted at SlashDot.
Sunday, August 3, 2008
Understanding RESTful services
RESTful web services seem to be gaining in popularity over SOAP / RPC oriented web services, but what is a REST (Representational State Transfer) service, really?
The term was coined by Roy Fielding in his Ph.D. dissertation in 2000, so it is not really a new thing. Some people like to wave the term around like a new mysterious tool that only the chosen ones can understand and wield. Some authors like to dance around the topic without getting down to the elevator speech description and go into great and elaborate examples on how "RPC is too complex in this or that scenario, and this is why you should use REST instead". Cut to the chase already!
One book that actually does get to the point is RESTful Web Services by Leonard Richardson & Sam Ruby (O'Reilly, ISBN 0-596-23926-0). Easy read, good examples, sensible commentary.
So... what is REST ?
Key: The URI is our permanent and unique key to the content
Content: Can be anything, really, as long as it can be sent across the transport
Transport: HTTP (GET, PUT, POST, DELETE)
Transaction: Always one complete operation per REST object
OK, so that might be a tad oversimplified, but it actually isn't much more complicated than that. In a way, you can compare it to a file system, and depending on how you build your content, it can be cached like a file system - in your ISP's cache, the company firewall cache, and your desktop browser cache.
Was that too easy? Well, there are some areas that are not quite as well defined, such as content description, content discovery, and access control - but there is more than one road to Rome here. Eventually, some sort of best practice will probably emerge, but right now it is sort of every RESTful service for itself.
IMO, for a developer, the true challenge with REST is deciding how to construct your data URIs and how to partition your data so that it is easily and logically accessible. You have to consider such stuff as categories, filtering, time ranges, wildcards, etc. If you intend to cache the data, you also need to figure out versioning so that you don't get stuck with stale data. On the coding side, the real action happens behind the HTTP transport on your server, where you have to do your stuff to pack and deliver or unpack and store your data.
It turns out that REST is not even a bit mysterious. Some people just like to pretend that it is. You don't need huge, complex commercial frameworks to write a REST service. Come to think of it, Delphi is a great tool to write REST servers in. It has all you need: good parsing tools, database integration, HTTP components, etc. </soapbox>
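To make that last claim a bit more concrete, here is a minimal sketch of a REST-style GET handler using Indy 10's TIdHTTPServer. The class name and the /customers URI scheme are made up for the example - this is not FDCLib code, just the general shape of the idea.

uses
  SysUtils, IdContext, IdCustomHTTPServer, IdHTTPServer;

type
  TTinyRestServer = class
  private
    FServer: TIdHTTPServer;
    procedure HandleGet(AContext: TIdContext;
      ARequestInfo: TIdHTTPRequestInfo; AResponseInfo: TIdHTTPResponseInfo);
  public
    constructor Create(aPort: Integer);
    destructor Destroy; override;
  end;

constructor TTinyRestServer.Create(aPort: Integer);
begin
  inherited Create;
  FServer := TIdHTTPServer.Create(nil);
  FServer.DefaultPort := aPort;
  FServer.OnCommandGet := HandleGet;   // GET maps to "read this resource"
  FServer.Active := True;
end;

destructor TTinyRestServer.Destroy;
begin
  FServer.Free;
  inherited;
end;

procedure TTinyRestServer.HandleGet(AContext: TIdContext;
  ARequestInfo: TIdHTTPRequestInfo; AResponseInfo: TIdHTTPResponseInfo);
begin
  // The URI is the key: /customers/42 identifies exactly one resource
  if Pos('/customers/', ARequestInfo.Document) = 1 then
  begin
    AResponseInfo.ContentType := 'text/plain';
    AResponseInfo.ContentText :=
      'customer ' + Copy(ARequestInfo.Document, Length('/customers/') + 1, MaxInt);
  end
  else
    AResponseInfo.ResponseNo := 404;   // unknown key -> not found
end;

The other verbs plug in the same way; as noted above, the real work is in how you pack and unpack the representations behind the handler.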
Saturday, August 2, 2008
Reusable Grid View - part 4 - Code Complete
The first part of FDCLib (Free Delphi Code Library) is out.
It contains the reusable grid view controller, a small color utility unit and three very simple demos.
The code is currently only tested under Delphi 2007, and the color utility unit implements a record with methods, so I guess that breaks older versions, but it is easy to change. I'll probably add some conditional code for compatibility later.
Details about versions and downloads can always be found at http://fdclib.fosdal.com. You can download a .zip file or grab it with SVN from the repository at SourceForge. I haven't created a download package on SourceForge yet, but that is coming as well.
The Grid Demos
There are three very simple demos included in the demo/DemoFDCLib project. I'd like to add some more later, but I think they demonstrate the basic functionality for now.
• DemoFrameGridViewController contains the interactive bits (i.e. the TStringGrid, etc.)
• DemoNumbersViewClass shows a series of numbers and their values squared, and also shows how to use a custom static column color.
• DemoColorsViewClass shows how to implement a procedural color and contains $FFFFFF rows :) Loading time is nonexistent.
• DemoDirectoryClass is a small utility class that populates a TStringList from a directory path with wildcards. DemoDirectoryViewClass implements the grid controller for showing content from that extended string list. The default directory in the demo is "%temp%\*", but feel free to experiment with changing it in the edit box. (A minimal sketch of the directory scan follows after this list.)
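As promised above, here is a minimal sketch of the kind of directory scan DemoDirectoryClass performs. The routine name is made up and the details are assumed - check the actual unit in the repository for the real implementation.

uses
  SysUtils, Classes;

// Populate a string list with the file names matching a path + wildcard mask,
// e.g. PopulateFromPath with "%temp%\*" expanded to a real path.
procedure PopulateFromPath(const aMask: string; aList: TStrings);
var
  sr: TSearchRec;
begin
  aList.Clear;
  if FindFirst(aMask, faAnyFile, sr) = 0 then
  try
    repeat
      if (sr.Name <> '.') and (sr.Name <> '..')
      then aList.Add(sr.Name);
    until FindNext(sr) <> 0;
  finally
    FindClose(sr);
  end;
end;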
Anonymous Methods - When to use them?
Like many others, I am still trying to wrap my head around this. All the hubbub about anon.methods, combined with the lack of direct descriptions of actual use, leads us to take an "Emperor's New Clothes" kind of view on them.
I believe there are some areas where anon.methods will have a significant impact. Not dramatic or revolutionary - but significant.
That said, we really do need something solid that demonstrates the advantages of anon.methods beyond the trivial one-liner demo methods (which can easily be done in the "old ways"). Until we see some, here is some speculation and conjecture from my side...
1. Lambda expressions and Generics
I assume that for generic classes, the compiler generates something to the effect of anon.methods behind the scenes. I also assume that anon.methods are one of the building blocks in type inference.
2. Isolation Glue
When you write classes that interact in the traditional OOP way, you often have to derive new classes from the base classes and implement a set of features (read: method overrides) where you apply your knowledge of the two new classes to create new interaction rules. Very often, such override methods are just a handful of lines, yet you have to add yet another two classes to achieve it. In concept, this is a similar problem domain to generics - but here we could be talking about trying to make two specific classes (such as a visual and a non-visual one) work together without having to reimplement all the plumbing in explicit detail.
Anon.methods can simplify this by allowing one of the classes to implement all the glue using anon.methods instead of having to implement overridden virtual methods in both classes.
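A minimal sketch of that idea (the class and type names are made up for the example): the non-visual view class accepts the glue as an anonymous method, instead of forcing you to derive a descendant just to override one small virtual method.

uses
  SysUtils;

type
  TCellFormatter = reference to function(aCol, aRow: Integer): string;

  TSimpleGridView = class
  private
    FFormatCell: TCellFormatter;
  public
    // The "glue" is injected here instead of living in a virtual override
    property FormatCell: TCellFormatter read FFormatCell write FFormatCell;
    function CellText(aCol, aRow: Integer): string;
  end;

function TSimpleGridView.CellText(aCol, aRow: Integer): string;
begin
  if Assigned(FFormatCell)
  then Result := FFormatCell(aCol, aRow)
  else Result := '';
end;

Used like this - the caller supplies the few lines of interaction logic inline:

View.FormatCell :=
  function(aCol, aRow: Integer): string
  begin
    Result := Format('R%d:C%d', [aRow, aCol]);
  end;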
3. Threads
Threads today are a fairly elaborate construction project. Anon.methods may enable us to create something similar to fibers. Where the old-style fibers had to deal with manual scheduling on a single core, today's multi-core-aware fiber management code can delegate the "packaged" anon.methods and their data (or fibers, if you like) to multiple cores for actual concurrency.
In what way will this differ from threads? Threads are elaborate to design, requiring yet another class descendant to implement the Execute method. You have to manually feed them their data/scope, manually retrieve any generated content, and manually implement a "tie it all together" sync point (i.e. where you wait for all threads to complete so that you can continue). They are expensive to set up and kick off, and there is a lot of housekeeping involved.
Off the top of my head, here are some forms of processing that can be compartmentalized using anon.methods without the overhead of multiple thread class implementations: sorting (key generation/comparison), matrix math and SIMD-style processing like compression/decompression, and other kinds of algorithmic code. In theory, any sequential processing that has no backwards or forwards dependencies in the dataset would be trivial to parallelize.
The result should be fewer obstacles to writing code that actually can utilize all your hardware.
• Is it possible to do this with the existing TThread? Yes it is.
• Can anon.methods reduce the complexity of doing it? Most likely, yes.
• Is it still possible to make horrific mistakes? Yes, but they are the same old mistakes as for regular threads (races, starvation, deadlocks, data you can't or shouldn't touch, etc.), and since you are now defining your thread/fiber in the scope of its deployer, the compiler may be able to make more intelligent judgements about the validity of your thread/fiber.
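To illustrate the "less housekeeping" point, here is a minimal sketch of a thread that simply runs whatever anonymous method it is given. The class name is made up, and Delphi 2009's TProc type from SysUtils is assumed.

uses
  Classes, SysUtils;

type
  TAnonWorker = class(TThread)
  private
    FWork: TProc;            // the captured anonymous method
  protected
    procedure Execute; override;
  public
    constructor Create(const aWork: TProc);
  end;

constructor TAnonWorker.Create(const aWork: TProc);
begin
  inherited Create(True);    // create suspended so FWork is set before Execute runs
  FreeOnTerminate := True;
  FWork := aWork;
  Resume;
end;

procedure TAnonWorker.Execute;
begin
  FWork();                   // the closure carries its own data and scope
end;

No new descendant per task - the work and its data travel with the closure:

TAnonWorker.Create(
  procedure
  begin
    CompressFile('c:\temp\bigfile.dat');   // hypothetical worker routine
  end);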
Will anon.methods lead to spaghetti?
I don't think so. Generally speaking, they may make some things clearer, as more of the logic can be packed into one class instead of being spread over multiple related classes. All "generic" code (pardon the pun) can be kept simple and ...ehm... generic, and you don't have to build a huge inheritance tree where methods are virtualized up the wazoo.
I think a possible factor in explaining why it is hard to come up with examples that are short, detailed and easily understood, is that we will benefit the most from anonymous methods in code that is anything but trivial.
P.S. In Bart Roozendaal's blog about anon.methods, Thomas Mueller mentions that the good old Turbo Pascal TCollection ForEach and FirstThat will be possible again. That's not a bad example either. I actually missed those a lot when I couldn't use them anymore.
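For the record, a generics-era ForEach along those lines could look something like this minimal sketch. The TIterate class is made up; TProc<T> and TList<T> are the Delphi 2009 SysUtils / Generics.Collections types.

uses
  SysUtils, Generics.Collections;

type
  TIterate = class
  public
    class procedure ForEach<T>(aList: TList<T>; const aAction: TProc<T>); static;
  end;

class procedure TIterate.ForEach<T>(aList: TList<T>; const aAction: TProc<T>);
var
  Item: T;
begin
  for Item in aList do
    aAction(Item);           // the "action" is an anonymous method
end;

Called like this, with the action written inline:

TIterate.ForEach<Integer>(Numbers,
  procedure(aValue: Integer)
  begin
    Writeln(aValue);
  end);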
Edit: Make sure to read Jarle Stabell's article on Lambda functions!
Edit 2:
• An older post from Barry Kelly on how closures present a challenge in Delphi for Win32. I guess we know by now that they landed on refcounting.
• The "Pascal gets Closures before Java" Reddit thread.
Edit 3:
• Wayne Niddery on Understanding Anonymous Methods
• Jolyon Smith on Anonymous Methods - When should they be used?
• Andreano Lanusse - Tiburon - Anonymous Methods
Edit 4:
• Joel Spolsky - Can your programming language do this?