Thursday, 15 December 2011

Reading an integer from a string

This is a topic barely broached in common how-tos, and a point of confusion for many. It seems to be a great secret, implemented only deep within thousands of lines of code, and even there not very modularly, with support for few bases besides the standard decimal, octal, binary and hexadecimal.

The algorithm


The algorithm is simple, but one I had tried to implement without success on several occasions; only today did I write a function that successfully and flexibly reads an integral value from a string of characters.
   The function I wrote takes two arguments: an integer named 'base', obviously to denote the base of the integer to read, and a pointer to a C-style string of characters named 'context'.
   In its scope, the function initializes two variables: Value, defined as 0, and Offset. The function sets Offset to 0 and enters a loop. Upon each iteration, it checks whether the character at context offset by Offset is zero (the string terminator); if it is, the loop breaks. Within the loop, the first thing performed is a check whether the character at context[Offset] lies between '0' and '9' (or 48 to 57, in ASCII notation). If it does, the most important part of the algorithm is performed: accumulating Value.
   Value is assigned as itself multiplied by base, plus the numeric value of the character (the character minus '0').
The pseudocode might look like

Value = Value * base + (context[Offset] - '0');

Of course, this is notated in an arbitrary semicolon-terminated language that respects the usual order of operations.
   The next part of the algorithm is an else-if: if the current character was not between '0' and '9', it is compared against the ranges 'A'-'Z' and 'a'-'z'. If it falls within one, another check is performed: whether the character's digit value (its offset from 'A' or 'a', plus 10) is less than the base. If it is not - as would happen with, say, the letter 'g' in a hexadecimal number - the parser is in no place to throw any form of exception, so the loop simply breaks, taking with it the value accumulated so far. If the comparison is true, the function moves on to the operation: Value is assigned as Value multiplied by base (as before), plus the current character converted to a digit. The conversion compares the character against 'Z': if it is less than or equal, the character is uppercase, and therefore in the range 65-90 in ASCII notation, so 65 ('A') is taken off to reveal its offset from 0; otherwise the character is greater than 90, and therefore lowercase, in the range 97-122, so 97 ('a') is taken off instead. 10 is then added to that offset, and this is the value added to Value * base. The pseudocode may be:

Value = base * Value + (((context[Offset] <= 'Z') ? context[Offset] - 'A' : context[Offset] - 'a') + 10);


And there: any base can be used, regardless of the alphabetic characters its notation requires, and regardless of their case. The last branch is an else case, where the loop simply breaks. After the loop, the function merely returns the value accumulated.

The code I used

Any avid programmer may already be implementing this as they read, but I felt it was necessary to include my own implementation. Keep in mind this is C++, and is merely my own take on the algorithm.


// Small helpers, equivalent to the standard isdigit/isalpha for ASCII:
static bool Char_Numeric(char c){
    return c >= '0' && c <= '9';
}
static bool Char_Alpha(char c){
    return (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z');
}

int readInt(int base, const char* context){
    if (!context)
        return 0;

    int Offset;
    int Value = 0;

    for (Offset = 0; context[Offset]; Offset++){
        // Note: digits too large for the base (e.g. '9' in octal) are
        // not rejected here.
        if (Char_Numeric(context[Offset]))
            Value = base * Value + (context[Offset] - '0');

        else if (Char_Alpha(context[Offset])){
            // Stop if this letter's digit value is too large for the
            // base (e.g. 'g' in hexadecimal):
            if ((((context[Offset] <= 'Z') ? context[Offset] - 'A' : context[Offset] - 'a') + 10) >= base)
                break;
            Value = base * Value + (((context[Offset] <= 'Z') ? context[Offset] - 'A' : context[Offset] - 'a') + 10);
        }
        else break;
    }

    return Value;
}

Thursday, 8 December 2011

pthreads and concurrency in general

The very moment I introduced pthreads into a program I was writing (an isometric RTS; I was inspired at the time by Age of Empires, my childhood-defining game), a huge drop in stability and overall reliability hit the codebase. I'd read about the problems with concurrency; I'd read about race conditions, deadlocks, all that, but it was still a nasty awakening to the state of concurrency in computing.
   Errors began to pop up, and it seemed a gradual process, but it was definitely happening: my code was turning into a mess. I had a thread for the very engine of the game, the AI of the working villagers who provide food and all that good stuff; I had a thread for the graphics engine, where a loop would walk a pointer through a linked list of graphics elements in a graphics queue and draw them all to the screen (SDL is a great library); I had a thread for handling events sent from X11 to SDL, and finally to me, which would call the function pointed to by a function pointer within a class 'C_Clickable', which held, among other things, the whereabouts of any clickable graphics element; and I had a main thread - a mainly idle, but necessary thread, which entered at call time, initialized everything, and basically oversaw and managed the workings of the other threads.
   A complicated series of instructions from any viewpoint, but it worked nicely. Buttons would be clicked and handled with a pleasing response time. It seemed to work. But, as is human nature, I ignored the glaringly obvious and sinister problem: I was receiving errors from xcb and X11 when runtime ended. Of course, I passed this off as a complaint about my rebellious use of concurrency, and I expected it was merely caused by my crude thread joining and SDL's seemingly unhandled destruction of resources. This wasn't the case.
   The time came to write a map generation algorithm, to give my class hierarchy a place to manifest its graphical representation. After a frustrating session of figuring out the hardly intricate mathematics of drawing an isometric tile in GIMP (A lost art; I figured this out a long time ago but had since forgotten) I began to implement a map drawing algorithm; I was leaving the creation of complex structures such as hills and forests for later, instead opting for a grass plane, so I could piece together the mechanics of the gameplay without spending so much time on something so complex.
   A for loop within a for loop, incrementing indices through a 2D array of tiles and coordinates through multiples of 16 and 32, was my choice, and possibly the most efficient one. Compilation was an arduous ordeal of fixing issues caused by g++'s inability to defer unknown-type errors until after parsing all classes, but that didn't bother me. I fixed them and complied, tail between my legs, with the wishes of g++. Once compiled, errors hid under every crevice of concurrency. A BadShmGet complaint was made by X11, and an unknown-request-in-queue assertion by xcb failed, but consistent it was not. Sometimes the window would close instantly upon depression of the lonely 'Begin' button that inhabited my underpopulated and (at this point, anyway) half-baked pre-game menu, the complaint would be given by X11, and runtime would cease. Sometimes xcb would crash after clicking said button, cutting runtime short - but there were often discrepancies beyond even these two pairings of behaviors.
   Sometimes X11 would drop my video memory and wait for an event - such as a mouse-move event, as was often the case - to send an unknown event to xcb and cause a crash, and I would be presented with a blank, functionless window, which would implode upon mere movement of the mouse. Sometimes it would outright segfault, which led me to believe my algorithms were faulty, as I half expected. I optimized and tweaked them into a nice state, and crossed my fingers that I had overstepped no array and nullified no pointer before its time. The state of unreliability in my app remained, with the exception of a 'double free or corruption' error (whose fruit is an intimidating core dump into stdout) at one point.
   It seemed the libraries I was using simply didn't like threads. But obviously this wasn't the case. SDL even has a wrapper library for pthreads - which is optional, or so I have heard. X11 has an XInitThreads function - which I called - to provide nice support for shared state memory. It may have been my implementation, or it might have been the libraries, but any person who can convince me this isn't a sorry state of affairs either has low standards but a good mouth for persuasion, or is a better programmer than I.

Simply approaching concurrency leads to complications. There are mutex locks, whose purpose is to fix the problems involved in shared-state memory - which they don't quite do; in fact, they cause a problem of their own: deadlocks. A messy solution to a complicated problem, yet companies such as Intel still push threads as though they will save the world, when the opposite is more likely. Problems such as mine will, with the world's growing love for multi-core architecture, become more common with further adoption of threads: highly non-deterministic programs where errors hide and are not consistent, and are therefore all the harder to debug. As the nigh-on-cliched expression goes, threads are evil. They are a complicated, messy solution to a complicated problem. But complaining can only get us so far. I propose a solution.
   Concurrent languages are making their presence known, but none seems to be a definite solution to the problem of concurrency. Several concepts of mine spring to mind when I ponder this problem.
   Concepts such as autonomous mutexes. The very idea of mutual exclusion is one that should be simple; disallowing threads access to variables is a fix-all, end-all solution to shared-state memory; with this I agree. Deadlocks and thread races are both caused by shared-state memory, but one is caused by the solution. The answer is simple, and it's not concurrency APIs for serial languages; fully concurrent programming languages are the answer. Interpreters have much, much greater control over concurrency during runtime than would ever be possible when a language is compiled; even something as daring as autonomous mutual exclusion is possible with an interpreter. It would take as little as 5 bytes of memory and an if statement to attempt a fix: bool Locked; Thread* Owner; within a class would act as a safeguard when used properly by an interpreter. But an important issue still stands.
   Say an interpreter for a concurrent language runs, and every time a variable's value is modified, the Locked bool in the variable's class instance is set to true, then reverted to false as operations cease. Say another thread is trying to access said variable to write to its address in memory. I'll lay it out like this, to try and explain.

Thread 1                                         Thread 2
                                                 Is variable locked? false.
Function checks if variable is locked: false.
                                                 Locked = true;
Locked = true;
Function writes to variable,
believing it has the lock
                                                 Function writes to variable

                     Thread race occurs


Again, the very concept of concurrency in computing catches us out. An interpreter written in a serial language with a concurrent API, interpreting a concurrent language, uses thread scheduling and so ultimately defeats the purpose of interpreting said concurrent language. It seems to me, at the time of this writing, that the very logic involved in concurrency will never be solved to the point of matching the reliability of serial algorithms.
   "What about checking for ownership - not just locking - before writing to a variable?" you may ask. Thread scheduling is ambiguous, and a thread assuming it is the one holding the lock is perfectly reasonable; it will hold the lock most of the time, and it is only on rare occasions that the kernel schedules the threads in such a way that this situation occurs.
   "Well, what about writing an interpreter for a concurrent language in a concurrent language, Mr. Mcclure?" you may be asking. I'll give you a minute to think about that paradox. No concurrent language exists that doesn't rely, at some point, on concurrent APIs for serial languages. It seems that computing, at its very core, is serial, and the only way for it to function perfectly is serial methodology.
   Is there a future for concurrency in computing? Yes. Is it coming soon? No. I have every intention to save the world from threads, but the only way to do so would be to implement threads. The paradox of bootstrap loading was figured out, so possibly someday this will be too, but I feel as though letting the theory simmer in my brain as I do with most concepts - to the point of definition and maturation - will not be sufficient.
   A complex series of algorithms forming in my mind as I type this could hold the key to safely locking shared-state memory, but it will take a lot of effort, research, and arduous testing to first implement it and then see if the results are satisfactory - all under the constantly oppressing fear that a thread race could happen at any given moment, with no perfect way to know whether one will.

Wednesday, 16 November 2011

The Word 'Optimize'

This word is plaguing tech blogs everywhere I look; people pass it around like a hot potato without even knowing what it means. If I must quote the person who set me off writing this: "You have a valid point. Google needs to get their act together and optimize android the same way ios is optimize." This is a prime example, ignoring the grammatical failure, from a poster who will remain anonymous. He (or she) claims (the context was a discussion on whether there is a point in putting > 1GB of RAM in an Android device, by the way) that Android isn't 'optimized' like iOS. If he had been more ambiguous he would've said metal is for emos. Which of course it isn't. . . mostly. Good metal isn't, anyway. Back on track: the ambiguity of this word plays a great deal into the overall effect of making its user look foolish in the wrong circumstance.
   'Optimize' is a word of many interpretations. The -O{1-3} flag in the GNU Compiler Collection (gcc and g++, anyway) applies automated 'optimization' to the output file involving many procedures, such as loop unrolling. I am wary of using this flag myself, as I haven't the slightest idea what it does aside from loop unrolling (although that is just one Google away), but it's never rick-roll'd me when I've used it, and it's recommended by anyone you might ask, so I guess it's okay in my books.
   Another use of the word may be just generally going through code looking for bottlenecks, unnecessary memory usage, garbage (referenceless memory), unnecessary system calls - basically the things that slow a process down - and changing them for the better. A lot of little things like this may bog something down severely, but in normal circumstances such things, except for serious bottlenecks, have very, very little effect upon a program; nothing like the difference between iOS and Android. (Don't get me wrong; I think Android is great, and open source truly is one of the greatest concepts-put-into-practice ever, but Android - the interface, at least - runs on a virtual machine, for gods' sake, and the iPhone 4S still runs as well as it does on 512MB of RAM.)
   The difference between variations of technique may be night and day (shared memory versus memory-mapped files or pipes, for example, where the former is much faster), and it is in the choice of techniques that the true effect of 'optimization' lies. Things such as switching a concurrent application from fork() to pthreads will remove the strain that lies in both the forking process and the replication of the memory stack, and therefore optimize the application greatly.
   Another point has arisen: hardware 'optimization'. Basically, this is about using, to the fullest extent, the abilities of any hardware component. Of course, the speed at which drivers read and write over the buses doesn't vary much; it's more dependent on the hardware itself. So one way to 'optimize' hardware interaction (in most cases, discussions that involve both 'optimization' and hardware are about GPUs) would be to refine the API used to interact with the driver - but this is more about the maturing of software (a natural process for any half-decent project) and the development of new technologies than anything else.

The main reason this word gets to me so is that people are using it to make an illogical - or at least terribly thought out - point. First of all, a developer who doesn't think about performance, and therefore doesn't actively choose or seek out alternatives that improve it (Python, anyone?), should not be called a developer; the term 'code monkey' may be more fitting in this instance. Hell, I even read some guy claiming programmers have barely caught on to quad-core CPUs after 5 years, and that Android, to accommodate a quad-core CPU (Tegra 3, anyone?), would have to go through another phase of 'optimization' to even make use of the new CPUs, and then developers would slowly 'optimize' for quad core. In reality, there is exactly no difference between developing for single, dual, quad - or perhaps even hex or oct core: whether or not you use any form of concurrency, any process (or thread) will be assigned a core regardless of the number of cores, and given time slots by the kernel whenever deemed fit.

Monday, 31 October 2011

'They Called Me A God'

Here is a conceptual wall of text just waiting for consumption by the masses.


At first, there was nothing; at first, I did nothing. There was nothing to do. Before long, doing nothing became tiresome, so I began to ponder. But, of course, pondering had its limits, for there was nothing to ponder about. So when pondering became monotonous, I began to explore. This came with its own set of benefits and disadvantages; no matter how long I searched, I found nothing, and it seemed there truly was nothing. I searched for a long time, longer than I had cared to ponder, although my perception of time was skewed at best: there was no way to measure time, nor was there certainty that time itself existed.
When I became disinterested in searching, I soon came to the realization that although there was nothing, there didn't have to be. I had found two things to do already; was it not possible for me to create something to do? I searched, but with a purpose this time: I searched for a means of creation. I must have searched for a long time, but of course this passed me by. With eyes for my new target, I began to notice things. I noticed tiny clouds, barely distinguishable from the possibly infinite black surrounding them.
These clouds were the most interesting thing I had ever seen, as they were not nothing; therefore, they were something. I observed them for a long time: I now measured time by their sporadic movements. This was not the best, nor the most accurate, way to tell time, but now time was sure to exist, and I could now see events, their causes, their effects, and their lengths; the cloud moving as a whole to its unspoken and possibly undetermined destination seemed to be a much slower process than, say, small parts of the cloud breaking away and rejoining. There soon seemed to be an abundance of clouds, each of them made up of millions upon millions of tiny particles, I had observed, where there used to be merely dozens. This gave me my most radical idea yet: if these clouds came from nothing, could I not create from nothing as well, or perhaps use the clouds to create? My urge to make something out of the infinite nothing began to grow to the point of passion.
With an even newer and more ambitious target, again I searched on. In some places, the concentration of clouds seemed more dense than elsewhere, and sometimes, multiple clouds had joined to make a denser, larger cloud – These, I kept my attention on. I noticed something else whilst exploring – something else entirely. Where the clouds were large and many, there seemed to be a pressure. A pressure that was surely not coming from the clouds, and which seemed to always come down to a pinnacle at a certain point in space.
It was at these points I observed the particles that made up the clouds slowly appearing, seemingly from nowhere. Perhaps there was another realm like this one overlapping, and particles were coming through the weakest points of the barrier that kept the entities of the two realms ignorant of each other – or perhaps the entities in the other realm know of my existence but I not theirs? If this was true, then where did the material that makes up the other realm come from? Perhaps there was no other realm, for if there was, would it not be a process such as this that formed the contents of said realm? If this was true, there could be infinite realms, each accidentally – or maybe purposefully – forming the next? I took a moment then to admire how complex and inquisitive my pondering had become. Whether or not it was possible, probable or impossible other realms existed, these points of material generation needed more examination than merely theoretical.
I began to interact with the particles coming from these points. I caught them and held them, and when they were many, I compressed them together. Many, many particles compressed together seemed to make something very, very different from the clouds, something that was not part of the infinite black, but something palpable. Interested, I waited. I waited until the particles were beyond counting – for gathering them would take as long as the clouds take to reach their destination – then I pushed them into each other, making a very, very dense, solid object. This was far more dense than many clouds that had joined, and far more solid than anything that had ever existed. This was a material that was special – it needed a name. After much pondering, I named it Rock, for only a sound so dense and solid as the Rock itself could describe such a material.
After I had spent such a time collecting tiny particles, the Rock continued my work. By itself, it started to pull other particles into itself, a tiny force at first. It was soon surrounded by its own cloud, which grew denser as it moved towards the Rock. The cloud grew and so did the Rock, attracting clouds from further and further away as it grew. Was a Rock the final destination of the clouds, or were they merely misguided ghosts, the only force moving them the momentum from the interactions of the particles that made them up, and from their creation?
The Rock grew and grew, with me transfixed in pondering and observation. It grew to a strange shape both long and wide, but some parts bigger than others, and when a single part was denser or larger it attracted more particles. Offended, I carefully evened it out into a perfect, round shape, even on all sides, so that it could grow into a healthy, uniform shape and not be burdened by parts of itself pulling the wrong amount of particles in the wrong direction. I watched and pondered, and the Rock grew and pulled.

Sunday, 30 October 2011

The Theory/Concepts behind Tenga

If you have read my post below about Python, you may have picked up the name Tenga. Tenga is an interpreted language of my design and implementation. Tenga is based on a few ideas:

1. Interpreted Languages are the future.

Some languages survive decades and come out on top; some languages seem timeless. But programming itself moves and grows, and this I foresee: soon gone will be the days of lengthy compile cycles; gone the days of recompiling simply to debug; gone the days of worrying whether a language falls short in the field of performance. With VM-based and interpreted languages growing ever more popular, and with hardware expanding into the impossible, an interpreter is not perceived as much of a performance hit as it once was, if at all. Features that can be implemented with an interpreter can go far beyond anything possible in a compiler: features such as the tcallback keyword, which declares a function along with a condition; when that condition is met, the function is called by the Tenga interpreter. This is typically used to declare a debugging function to call if the main function returns with a nonzero status. Eg:

int main(int argc, char- args-)
        if argc is < 2, return -1.
        
        else, Parse_Input(-argc, -args).
        return 0.

tcallback int Debug_Menu()
        because `main` is not 0,  //Subject to change
        write "An error occurred. Are you sure you entered the arguments?\n".
        return 0.

And the backtick operator: this returns the last value returned by any function. No arguments can be given; it takes a function name and returns the last value said function returned, regardless of the arguments it was called with. Eg (beware: pseudocode):


int Parse_Input()
        if `Scan_Input` is not 0 or `Lex_Input` is not 0,
                Send_Error(),
                return -1.
        //Et cetera


Etc.

2. Programming is messy.

There's no escaping that fact. Languages like Python do try their best to clean it up, but it's a futile effort. Clean, readable code is the goal of all programmers - or should be, at least - and languages like Python bring this goal closer than ever; whatever kind of slippery slope this involves, it is positive. My own effort takes a different approach. Python aims (or at least succeeds as a side effect) to remove most, if not all, clutter from the screen, such as parentheses and semicolons (at the cost of newline flexibility and mandatory indentation - proper indentation is very, very good practice, but again, this is a hit to flexibility, and rules out some IDEs that may use 5 spaces for indentation instead of the standard 8 spaces/tab), whereas Tenga aims to organise ideas in a way that has never been widely used: English-like sentences/statements, which brings us to the next point:

3. Commas and 'Periods' help organize code

Even after using sentence-based syntax for only a short while, writing the sample source code with which to test the early versions of the Tenga interpreter, I have already noticed a massive difference in the way I write code with Tenga. It is another thing to think about, but I'm sure it will come naturally after a small adjustment period. Commas and ...periods... help organize code into nice little chunks in the form of sentences, which are statements grouped together with commas and terminated by a *period.
Eg:

int main()
        int a, b, c,
        char d, e, f.
        
        d = read().

        if d is 'a',
                write "Aw cool bro\n".


        return 0.


See how the declarations are separated from everything else by both a blank line and a period? It doesn't look all that impressive, and it barely affects readability, but the effect it has on writing code is great.

*I still don't like using that word.


Tenga will involve features and operators that are either standard, new, or raised from the dead. Many arts have been lost, and comma/period termination died with Algol. I intend to raise it from the dead, and I intend to keep improving Tenga until I'm dead and gone.

Python is not a good first programming language.

Every day I see Python recommended as a first programming language, and I can't fully express in mere words how wrong I think this is.

People recommend Python claiming its simplicity will give you a good kickstart into programming, but this is exactly the approach that will turn programming from a sought-after skill involving discipline and dedication into a sit-down-and-screw-around trivial activity. Books proclaiming to teach you x language in y days are bad enough, but a language so simple that you can 'focus on just the logistics' is the exact opposite of what a beginner programmer needs, and here's why:

1.  Programming is messy.

There's no escaping that fact. There are backwards-compatibility breaks that obsolete entire projects; there are compilers that will throw an error about missing parentheses or a misplaced semicolon in a header file and proclaim the error is in the file including it; there are programs so large even their sole developers get lost within them. What I'm trying to say is that programming is not clean nor simple; it never has been, never should be, and probably never will be. To begin programming by learning Python will, in fact, help you focus on just the logistics: the flow, the structure, all those good things, and that is beneficial; with that I am okay. But learning to program in Python bears unforeseen consequences. I have read of several cases of people either starting in Python or switching to it (from Perl, of course) who have gotten lost in Java or C code, and blamed their inherited inability to both read and write code in a messier language on their fixation with Python. But is it Python's fault? Python is a good language. But for someone ignorant of the theoretical side of things (I almost used the word messy again), it has a numbing effect. Programmers get so wrapped up in their comfortable, warm Python beds, sucking their thumbs and humming sweetly out of satisfaction with their perceived understanding of everything, that they fail to notice their bed slowly taking the shape of a coffin, and their ears are deaf to the sound of the hammer. They are hidden from the nasty monsters outside, such as the lambda calculus, assembly, and compilers; everything a programmer needs to know inside and out to truly be considered a good programmer. Now, don't get me wrong; of course I'm not saying every Python developer is sealing their own fate of mediocrity. I'll explain further later.

2. Newline termination
Note: This section is deprecated. Pratt parsing is probably used by the interpreter, therefore making newline 'termination' not a drawback, merely less messy. Forced indentation, however, is more of an issue.

Okay, this section is more of a criticism of the Python interpreter.
I've always been beyond curious when it comes to implementing a compiler/interpreter, and after a good play-around and a semi-thorough study of the Python interpreter, I have found a few things that tickled my pondering bone. Why do people implement newline termination? C++ is terminated by semicolons; Cobol is terminated by full stops (no, I'm not American, so I will not call full stops 'periods'); Lisp is fully parenthesized; and Tenga, my under-development (up to version 0.001 Alpha Mc-Does-not-work-yet pre-release 0) baby of an interpreted language, is terminated by an interesting mix of commas and full stops similar to that found in Algol 68. Usually, compilers/interpreters discard newline characters at the scanning (sometimes lexing) stage and terminate statements at the given character, but Python terminates at newline characters. Why is this? Of course, Python has no termination character (unless you count '\n'), so a newline is a feasible indication to the interpreter that a statement is over, but a massive gain in flexibility would be had with the implementation of argument-satisfaction termination. This may be a new concept, it may not be, but here's an example anyway. (Beware the pseudocode.)

for (int x = 0; x < y; x++)

As we can see, 'for' takes one argument when the parenthesized block is considered as a whole, and the parenthesized segment takes three arguments, essentially. Here, have some pseudo-EBNF notation.

forloop := "for" "(" [statement] ";" [comparison] ";" [statement] ")" ["{" statement {statement} "}"]

As you can see, the initialisation, condition and incrementation are all optional, so maybe this isn't the best example, but nonetheless, a good interpreter would not complain if this fit the specification for the syntax of the language, and neither would the Python interpreter - that is, if Python used syntax like this. But what if the user wanted to split it up over multiple lines?

for
(int x = 0;
x < y; x++){ //code }


Okay, this is messy (there it is again), and I know of no programmer who would do this with a for loop with such arguments, but sometimes things such as linked lists make this kind of typographical acrobatics necessary (a Rootnode->next->next->governor = Rootnode->next->next->next->governor; kind of deal), and newline termination would cause an interpreter or compiler to complain about it. If, instead, argument satisfaction were implemented, the contents of the parentheses would evaluate to the correct number of arguments, the for loop would be syntactically correct, and the interpreter would move on - but of course, this is not the case.

And last, but not least:

3. Python is used by the wrong people.

Beginning programming on a slippery slope into language dependence and newline-terminated statements is definitely the wrong approach, by all standards. What would I recommend, you ask? Assembly. "Assembly to start off? What is this guy smoking?" you may say, but bear with me. My advice is to go as low-level as possible, as Python is an extremely high-level language. Starting with Assembly (okay, it might be a bit of a stretch; any language between, and including, C++ and Assembly would be low-level enough to give you a sense of direction when trudging through semicolons) and moving upwards - learning Lisp on the way; every programmer should learn a little Lisp - reading about theory along the way, of course, will give you a full understanding of programming itself, instead of just the ability to read and write the simplest of programming languages. If I must allude to one of the greatest essays of all time, Teach Yourself Programming in Ten Years: a minimum of 10,000 hours must be dedicated to something to truly become good at it. I think it is then, with a profound understanding, that a programmer should indulge in Python. Now the programmer understands, can benefit from that understanding, and can use his knowledge of algorithms to exploit Python to its full advantage. The right person to use Python is an experienced programmer, not a new programmer.