Wednesday, March 25, 2009
Keeping on schedule is tough, but I want to stick to this Wednesday thing. Today I want to talk about some concepts that seem important to concurrency (more specifically, shared-memory concurrency) but that don't get distinguished very often.
The first concept is stopping: a process can stop for a number of reasons, and it is worth thinking about what happens when it does. In lock-based systems, a stopped process usually means a lock never gets released, so other processes can't continue running. One nice property, then, is for the system to keep running even if a process stops. In some systems, such as a database, it is also useful to be able to stop a process deliberately, perhaps to resolve a deadlock.
Once a process has stopped in the middle of some action, what do we do? In some cases we can just ignore the process, but usually that is not desirable. If we could complete the process's action, we should be okay. One way to do this is to use descriptors for actions: a descriptor gives a full description of what was to be done, so another process can complete the stopped process's action. In many lock-free algorithms this completion is attempted immediately, to satisfy the lock-free progress condition, but helping is expensive. I prefer to think of the descriptor as giving you the option of helping: helping is safe, but we should manage it to increase efficiency. Descriptors work for many actions, such as data structure operations like add and delete. For a whole transaction, this information may be much harder to store, since it has to include most, or all, of the power of a programming language.
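To make this concrete, here is a minimal sketch of descriptor-based helping for a single-word update. The types and names are my own invention for illustration, not from any particular algorithm; real descriptors carry more information and are typically installed in the data structure itself.

```cpp
#include <atomic>

// Hypothetical descriptor for a single-word update: it records the full
// intent of the operation, so any thread that finds it can finish the work.
struct Descriptor {
    std::atomic<int>* target;       // the location being updated
    int expected;                   // the value the operation observed
    int desired;                    // the value it intends to install
    std::atomic<bool> done{false};  // set once the update has landed
};

// Helping is safe: the compare-and-swap either installs the value, or
// fails harmlessly because some other helper already completed the job.
void help(Descriptor* d) {
    if (!d->done.load(std::memory_order_acquire)) {
        int exp = d->expected;
        d->target->compare_exchange_strong(exp, d->desired);
        d->done.store(true, std::memory_order_release);
    }
}
```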
There is another option: if we can't ignore the action and can't complete it, maybe we can reverse it and make it as if it never happened. Reversing can be easier than completing, and there are many ways to do it: taking snapshots, recording the actions performed, and more. This is a good option for transactional systems, since full information about the transaction cannot be known in advance, and even if it could be, you know that reversing will be able to complete, while you don't know whether the transaction itself will halt.
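As a toy illustration of the recording approach, here is an undo log of my own sketching, not taken from any particular system: each write saves a closure that restores the old value, and rolling back replays the closures in reverse order. Note that rollback always terminates, one inverse per recorded action, even when the forward computation might not.

```cpp
#include <functional>
#include <vector>

// Toy undo log: every write records an inverse that restores the old value.
struct UndoLog {
    std::vector<std::function<void()>> inverses;

    void write(int& loc, int value) {
        int old = loc;
        inverses.push_back([&loc, old] { loc = old; });  // record the inverse
        loc = value;                                     // perform the write
    }

    // Reverse everything, most recent action first, as if it never happened.
    void rollback() {
        for (auto it = inverses.rbegin(); it != inverses.rend(); ++it)
            (*it)();
        inverses.clear();
    }
};
```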
Stopping, completing, and reversing are important concepts, and it would be nice if there were a richer theory behind them. Reversing of operations is used in the Transactional Boosting work by Herlihy and others: to use lock-free data structures with transactional memory, we need to be able to reverse things that happened to the data structure. Different data structures require different augmentations to handle reversing, and in some cases the ability to reverse or complete may depend on the current state of the system.
Two other important aspects are idempotence and commutativity; these too can depend on the current state of the system.
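A couple of toy examples of my own, using a plain set and counter, to show what I mean:

```cpp
#include <set>

void examples() {
    std::set<int> s;
    s.insert(5);
    s.insert(5);   // idempotent: repeating the insert changes nothing

    int counter = 0;
    ++counter;
    ++counter;     // not idempotent: repeating it changes the result

    // insert(1) and insert(2) commute: either order yields {1, 2, 5}.
    // insert(1) and erase(1) do not: the final contents depend on the order.
    // And erase(1) is a no-op when 1 is absent, so whether it conflicts
    // with other operations at all depends on the current state.
    s.insert(1);
    s.erase(1);
}
```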
I think a solid theory of stopping, completing, reversing, idempotence, and commutativity for operations on data structures would go a long way toward making lock-free data structures easier to work with.
Wednesday, March 18, 2009
A Code Editor with Powerful Layout
Text editors have long been a favourite hobby hacking project. There's a lot of innovation going on in user interfaces, extensibility, and networking. One thing I haven't seen much of is a text/code editor with a powerful layout engine.
By powerful I mean support for nested layout structures and accurate placement of elements. A few editors have editing widgets powerful enough to allow pictures and interactive buttons: Emacs allows a fair bit of this, and the plt-scheme environment allows embedding richer data into code. Unfortunately, neither seems to let you place elements very accurately.
I want a framework to play around with layout ideas for code. We're still stuck with colors and indentation as the only ways to emphasize various pieces of code. The design world has huge collections of techniques and strategies for indicating hierarchies, important pieces, and the relationships between elements. And since so much time is spent looking through and reading code, anything that makes it nicer would probably be quite appreciated.
Surprisingly, I've actually already started on this one, though it'll be a while before I get a working version out. I've decided that for my project, the TeX way of doing layout strikes a good balance of power and efficiency. A good introduction to TeX layout is "A Functional Description of TeX's Formula Layout" by Heckmann and Wilhelm. TeX sets up its layout using nested, aligned boxes. This means the underlying data structure doesn't support rotating elements, which I think is a reasonable restriction. You do, however, gain an incredibly flexible system that lets you position elements very precisely, with the nesting making some tasks a lot simpler.
Okay, this still sounds very vague, but the way I hope it works is this: you have some code, which is roughly parsed to extract the parts that are important for layout.
The results of the parse are then sent to the layout engine, which is custom-made for the parser. It takes the parse tree and produces a layout in box format, which is then drawn on the screen using a graphics library like Cairo. The first version would probably be output-only: it would do layout once, without worrying about local updates in between.
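To give a flavour of the box format, here is a rough sketch of nested horizontal and vertical boxes with a single pass that assigns positions. All the types and names are made up for illustration; real TeX boxes also carry glue, baselines, and depth.

```cpp
#include <algorithm>
#include <vector>

// Leaf boxes have a fixed size (say, a run of text); hbox and vbox
// containers size themselves from their children.
struct Box {
    double width = 0, height = 0;  // fixed for leaves, computed for containers
    double x = 0, y = 0;           // absolute position, filled in by layout
    bool horizontal = true;        // hbox: left-to-right; vbox: top-to-bottom
    std::vector<Box> children;
};

// Place each child at an accumulated offset, then grow the container
// to enclose them all.
void layout(Box& b, double x, double y) {
    b.x = x; b.y = y;
    if (b.children.empty()) return;  // leaf: size is already known
    double cx = x, cy = y;
    b.width = b.height = 0;
    for (Box& c : b.children) {
        layout(c, cx, cy);
        if (b.horizontal) {
            cx += c.width;
            b.width += c.width;
            b.height = std::max(b.height, c.height);
        } else {
            cy += c.height;
            b.height += c.height;
            b.width = std::max(b.width, c.width);
        }
    }
}
```

After the pass, every box has absolute coordinates, so drawing is a simple tree walk, and ideas like side-by-side blocks or context-dependent indents become different ways of building the tree.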
This first version would already be quite useful, though: once it is up, you can start creating different layout engines and trying out different ideas. The main thing I want to play around with first is the way hierarchies are displayed. Indentation is nice, but in most editors it is rather limited. With a box format, we can play around with laying out blocks of code side by side, changing the indent distance based on the context, maybe even adding some light borders in the right places.
Wednesday, March 11, 2009
Formal Methods and Me
My research isn't strictly in formal methods, or formal verification as it's sometimes called, but at some point I found that I really liked being a little extra careful with proofs. It started when I stumbled on Leslie Lamport's "How to Write a Proof", which describes a way of structuring proofs that encourages you to explain all the small steps. It is similar to the style of proofs introduced by David Hilbert. "How to Write a Proof" has several interesting stories about writing a proof, then trying to rewrite it in the structured format, and finding an error in the original proof as a consequence. As a programmer, this seemed like a great way to reduce errors in proofs.
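To show the flavour, here is a toy example of my own in the numbered, hierarchical style (not one from the paper): every step gets a label, and every step says exactly what it uses.

```latex
\noindent\textsc{Theorem.} If $n$ is even, then $n^2$ is even.

\noindent$\langle1\rangle1.$ There is an integer $k$ such that $n = 2k$.\\
\textsc{Proof:} Definition of even.

\noindent$\langle1\rangle2.$ $n^2 = 2(2k^2)$.\\
\textsc{Proof:} Square both sides of $\langle1\rangle1$ and factor out a 2.

\noindent$\langle1\rangle3.$ Q.E.D.\\
\textsc{Proof:} $2k^2$ is an integer, so by $\langle1\rangle2$, $n^2$ is
twice an integer, which is the definition of even.
```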
So I tried it out in my Analysis class. I took a proof from the book and created a structured version. It wasn't too bad, since the proof was quite careful, but at one point I wondered "why did they use an open set here?", since the intuitive description of the proof didn't seem to need it. After a bit more thinking, I figured out that it was because they had to use a theorem stated earlier in terms of open sets. I don't think it mattered too much in that specific case, but at that moment I felt I had a deeper understanding of what was going on, something I might have just skipped over once I got the intuition.
Then I got to thinking that in a program, a difference like this, between a function that does and doesn't use an open set, is like an off-by-one error, the kind of thing that often causes things to muck up. For more complicated programs, shouldn't we have tools that let us do this kind of reasoning in an automated way? This started my interest in formal methods, which at that point mainly meant me trying to write structured proofs in my math courses. You would figure that mathematicians and programmers alike would love any way to reduce errors, but the resistance comes quite quickly.
There are horror stories of multi-page proofs that 1+1=2. This one is actually true: the usual source is Whitehead and Russell's Principia Mathematica, where the proposition arrives hundreds of pages in. Mathematicians, I found, really like reading proofs as puzzles in themselves; this was also noted in "How to Write a Proof". Structuring the proof spoils the flowing prose, and detailing all the small steps in between is a bit like explaining a joke. Programmers have a similar feeling, though their concerns are usually about how much work is involved, or they fall back on the argument that "no proof or model can totally capture a system".
These concerns are somewhat justified, but for me, a nicely structured proof is just as nice as a nicely structured program, where the pieces fit together like parts of a puzzle. I think that is one good way to frame my view: some people might look at a jigsaw puzzle and say "ah, well, there are leaves here and there's the rest of the trunk, so these pieces must fit together", and that may well be true, but to me it's even better when you can actually see the pieces fit together.
Wednesday, March 4, 2009
Computer Science - What's that?
I'm going to try to stick to posting every Wednesday. This post needs a lot more work, but my main goal is just to get a regular schedule going.
A lot of writing has been devoted just to explaining what computer science is or isn't. Some people even want to get rid of the term altogether.
Here's my view on what computer science is, for a regular person who is curious. Computer science is a really broad field these days, but to understand the core, it's good to start with complexity theory. That's not the most accurate use of the term, but I mean the area that deals specifically with problems and how computers solve them.
Why is this fun? I think that question isn't asked enough, probably because serious science and fun aren't commonly associated. Puzzle solving is fun, and puzzle solving with computers is a new puzzle. Solving crosswords, jigsaw puzzles, and sudoku is pretty fun, and after you start getting good, you develop a few rules that help you solve the puzzles faster.
Now, try to think of how you would get a computer to solve these puzzles for you. It spoils the fun of the original puzzle, but it gives you a whole new set of tools to work with. Just as with a sudoku puzzle, once you get good at solving it, you want to solve it faster. With a computer program, you have to count how many operations it is doing and try to squeeze that number down. Even counting the operations is a puzzle in itself.
Consider this puzzle: you have a box with a row of switches and a lightbulb. The game is to flip the switches so that the lightbulb turns on. Every time you add a switch, the number of combinations to try doubles. The boring way to play the game is to try all the combinations one at a time.
To make it more interesting, suppose we can now see inside the box. The inside is made up of three basic pieces: AND, OR, and NOT. These are easier to describe than a sudoku puzzle! Now that we can see what the box is made of, is there a way to use that information to turn the lightbulb on significantly faster than trying all the combinations? This puzzle belongs to a whole class of puzzles called NP-complete, which most researchers suspect have no efficient solution, though no one has been able to prove it.
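Here is a sketch of the boring strategy, with the setup and names invented for illustration: treat the box as a function of n switches and try all 2^n settings, so adding one switch doubles the loop.

```cpp
#include <cstdint>
#include <functional>

// Try every setting of n switches; each switch is one bit of s.
// Returns true and reports the winning setting if the bulb can light up.
bool lightbulb_turns_on(int n, const std::function<bool(uint64_t)>& box,
                        uint64_t& winning) {
    for (uint64_t s = 0; s < (uint64_t{1} << n); ++s) {
        if (box(s)) { winning = s; return true; }
    }
    return false;  // no combination of switches lights the bulb
}

int main() {
    // A box built from AND, OR, and NOT on three switches: the bulb is on
    // exactly when switch 0 is up, and switch 1 or switch 2 is down.
    auto box = [](uint64_t s) {
        bool a = s & 1, b = s & 2, c = s & 4;
        return a && (!b || !c);
    };
    uint64_t w;
    return lightbulb_turns_on(3, box, w) ? 0 : 1;
}
```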