Array indexing in C++ behaves like pointer arithmetic. In addition to referencing element i of array a as a[i], C++ also allows you to reference the same element as i[a]. This is because (as in C) an array name decays to a pointer to its first element, so a[i] = *(a + i) = *(i + a) = i[a].
The argument mode points to a string beginning with one of the following sequences (additional characters may follow these sequences):

    "r"   Open text file for reading. The stream is positioned at the
          beginning of the file.

    "r+"  Open for reading and writing. The stream is positioned at the
          beginning of the file.

    "w"   Truncate file to zero length or create text file for writing.
          The stream is positioned at the beginning of the file.

    "w+"  Open for reading and writing. The file is created if it does
          not exist, otherwise it is truncated. The stream is positioned
          at the beginning of the file.

    "a"   Open for writing. The file is created if it does not exist.
          The stream is positioned at the end of the file. Subsequent
          writes to the file will always end up at the then current end
          of file, irrespective of any intervening fseek(3) or similar.

    "a+"  Open for reading and writing. The file is created if it does
          not exist. The stream is positioned at the end of the file.
          Subsequent writes to the file will always end up at the then
          current end of file, irrespective of any intervening fseek(3)
          or similar.
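Python's built-in open() accepts the same mode strings with essentially the same semantics, so the behaviors above can be sketched directly in Python (the file path here is a scratch file created just for the demonstration):

```python
import os
import tempfile

# Scratch file to demonstrate the modes on.
path = os.path.join(tempfile.mkdtemp(), "modes.txt")

# "w": truncate/create for writing, positioned at the beginning.
with open(path, "w") as f:
    f.write("hello\n")

# "r": open for reading, positioned at the beginning.
with open(path, "r") as f:
    assert f.read() == "hello\n"

# "a": open for writing, positioned at the end; writes append.
with open(path, "a") as f:
    f.write("world\n")

with open(path) as f:
    assert f.read() == "hello\nworld\n"
```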
    Set up
    try:
        Do something
    finally:
        Tear down
“Set up” could be opening a file, and “Tear down” closing it. The try-finally construct guarantees that the “Tear down” part is always executed, even if the code that does the work raises an exception or returns early.
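For the file case, the pattern looks like this (the sketch writes a scratch file first so it is self-contained):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.txt")
tmp = open(path, "w")
tmp.write("some data\n")
tmp.close()

# Set up
f = open(path)
try:
    # Do something
    data = f.read()
finally:
    # Tear down -- runs even if the read raises
    f.close()

assert f.closed
assert data == "some data\n"
```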
If you do this a lot, it would be quite convenient if you could put the “Set up” and “Tear down” code in a library function, to make it easy to reuse. You can of course do something like
    def controlled_execution(callback):
        set things up
        try:
            callback(thing)
        finally:
            tear things down

    def my_function(thing):
        do something

    controlled_execution(my_function)
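A runnable version of the callback approach, using an open file as the managed “thing” (the scratch file and the results list are just for illustration):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "x.txt")
tmp = open(path, "w")
tmp.write("payload\n")
tmp.close()

def controlled_execution(callback):
    # set things up
    thing = open(path)
    try:
        callback(thing)
    finally:
        # tear things down
        thing.close()

results = []

def my_function(thing):
    # do something
    results.append(thing.read())

controlled_execution(my_function)
assert results == ["payload\n"]
```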
But that’s a bit verbose, especially if you need to modify local variables. Another approach is to use a one-shot generator, and use the for-in statement to “wrap” the code:
    def controlled_execution():
        set things up
        try:
            yield thing
        finally:
            tear things down

    for thing in controlled_execution():
        do something with thing
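In any Python that allows yield inside try-finally (2.5 and later), this works as described; the finally clause runs when the exhausted generator is resumed after the loop body. A self-contained sketch, again using a file as the managed object:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "x.txt")
tmp = open(path, "w")
tmp.write("payload\n")
tmp.close()

def controlled_execution():
    # set things up
    thing = open(path)
    try:
        yield thing
    finally:
        # tear things down
        thing.close()

for thing in controlled_execution():
    data = thing.read()

assert data == "payload\n"
assert thing.closed
```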
But yield isn't even allowed inside a try-finally in 2.4 and earlier. And while that could be fixed (and it has been fixed in 2.5), it’s still a bit weird to use a loop construct when you know that you only want to execute something once.
So after contemplating a number of alternatives, the python-dev team finally came up with a generalization of the latter, using an object instead of a generator to control the behavior of an external piece of code:
    class controlled_execution:
        def __enter__(self):
            set things up
            return thing
        def __exit__(self, type, value, traceback):
            tear things down

    with controlled_execution() as thing:
        some code
Now, when the “with” statement is executed, Python evaluates the expression, calls the __enter__ method on the resulting value (which is called a “context manager”), and assigns whatever __enter__ returns to the variable given by as. Python then executes the code body, and no matter what happens in that code, calls the context manager’s __exit__ method.
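A minimal concrete example of the protocol (the class name and the list-as-“thing” are invented for illustration):

```python
class managed_list:
    """Context manager that hands out a list and records the tear-down."""
    def __enter__(self):
        # set things up
        self.thing = []
        return self.thing
    def __exit__(self, type, value, traceback):
        # tear things down (here: just record that we were called)
        self.exited = True

mgr = managed_list()
with mgr as thing:        # __enter__'s return value is bound to `thing`
    thing.append(1)

assert mgr.exited         # __exit__ ran when the block finished
assert mgr.thing == [1]
```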
As an extra bonus, the __exit__ method can look at the exception, if any, and suppress it or act on it as necessary. To suppress the exception, just return a true value. For example, the following __exit__ method swallows any TypeError, but lets all other exceptions through:
    def __exit__(self, type, value, traceback):
        return isinstance(value, TypeError)
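To see the suppression in action (the class name here is invented):

```python
class suppress_type_error:
    def __enter__(self):
        return self
    def __exit__(self, type, value, traceback):
        # A true return value tells Python to swallow the exception.
        return isinstance(value, TypeError)

with suppress_type_error():
    len(None)              # raises TypeError, which __exit__ suppresses

passed_through = False
try:
    with suppress_type_error():
        1 / 0              # ZeroDivisionError is not a TypeError...
except ZeroDivisionError:
    passed_through = True  # ...so it propagates as usual

assert passed_through
```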
In Python 2.5, the file object has been equipped with __enter__ and __exit__ methods; the former simply returns the file object itself, and the latter closes the file:
    >>> f = open("x.txt")
    >>> f
    <open file 'x.txt', mode 'r' at 0x00AE82F0>
    >>> f.__enter__()
    <open file 'x.txt', mode 'r' at 0x00AE82F0>
so to open a file, process its contents, and make sure to close it, you can simply do:
    with open("x.txt") as f:
        data = f.read()
        do something with data
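A complete runnable version of this, creating the file first so the sketch is self-contained:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "x.txt")
with open(path, "w") as out:
    out.write("some data\n")

with open(path) as f:
    data = f.read()

# The file is closed automatically, even if read() had raised.
assert f.closed
assert data == "some data\n"
```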