Wednesday, April 22, 2009
So far this blog experiment has worked out well: I've been writing regularly and getting a decent number of words out in a fairly short amount of time.
I'm not happy with the quality of the posts so far, though, so I'm going to start something new, where there is still a regular posting schedule but I set aside enough time to make high-quality posts.
I'll probably still post on here, but not on a regular schedule.
Wednesday, April 15, 2009
Open source
I've been using open source software on and off for quite a while, and it constantly amazes me how much there is out there and how well supported it is. The huge package repositories of Debian and Gentoo actually get tested, and sometimes maintain their own patch sets for individual packages; that seems like an incredible amount of work, and it's volunteer-supported.
There are recurring complaints about open source, but in some ways they're a measure of what people have come to expect from it. Ten years ago, you had to muck around with a lot of things before something eventually worked, and that was good enough; the better projects had a small, dedicated group, and you could get pretty good help from the forum or mailing list. These days it almost seems like every large project has an army of people doing everything from testing to documenting to programming.
A major complaint used to be about the huge number of window managers available. A lot of the complaints were that there wasn't enough standardization and that these other projects were diluting the effort. But then GNOME and KDE got more mature, and those complaints died down. It seems the number of window managers wasn't the real problem, just the lack of some nice defaults.
This is great, because I really like the whole scene of experimentation that goes on: you can change the window manager, rip things out, add things in, and organize how you work with your computer in different ways. The other nice thing is that the X.org people think about how to introduce new features in a window-manager-friendly way. That slows development down, but looking at the new things coming into X.org, such as indirect rendering, new driver architectures, and MPX, makes me think there's only more good stuff to come.
There's a lot of back-end work going on that doesn't look too impressive from the outside, especially when it makes other software stop working for random reasons, but from a programmer's viewpoint it looks clean and thoughtfully designed. I can't wait to see what gets built on top of it.
Wednesday, April 8, 2009
Memory Consistency Modelling
I made this presentation a while ago as a quick introduction to memory consistency models. I've lengthened it a bit and made it my first attempt at an online presentation.
You'll probably find I'm going at a very slow pace in this video; I tended to mess things up when I tried a quicker pace, and this ended up being the best way to actually get through the whole thing.
A few things to note: I'm talking very generally about the models, so I didn't use specific processors like SPARC or x86. In my research I mostly worked with simpler theoretical models, which might be one reason for my uncommon optimism about these methods.
Intro to Memory Consistency Modelling.
I don't think the theory behind memory consistency models is too bad, and it should actually be able to handle quite a bit. I'm confident that almost all the shared-memory concurrency we have now could eventually be handled by memory consistency model theory, if it keeps getting developed through research. I won't take a guess about future applications or how well it could be built into a method for constructing software, though.
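The video stays at the level of abstract models, but to make the kind of question they answer concrete, here is a small Haskell sketch of my own (not taken from the presentation): it enumerates the interleavings that sequential consistency allows for the classic store-buffering litmus test, and checks whether the outcome where both reads return 0 is possible.

```haskell
import qualified Data.Map as M

-- A shared-memory operation: write a value to a location, or read a
-- location into a named register.
data Op = Write String Int | Read String String

-- All interleavings of two threads' programs that preserve each
-- thread's program order; this is exactly what sequential consistency
-- allows.
interleavings :: [Op] -> [Op] -> [[Op]]
interleavings [] ys = [ys]
interleavings xs [] = [xs]
interleavings (x:xs) (y:ys) =
  map (x:) (interleavings xs (y:ys)) ++ map (y:) (interleavings (x:xs) ys)

-- Run one interleaving against memory that starts at zero, recording
-- the value each read placed in its register.
run :: [Op] -> M.Map String Int
run = go M.empty M.empty
  where
    go _   regs []                  = regs
    go mem regs (Write loc v : ops) = go (M.insert loc v mem) regs ops
    go mem regs (Read loc r  : ops) =
      go mem (M.insert r (M.findWithDefault 0 loc mem) regs) ops

-- Store buffering: thread 1 writes x then reads y, thread 2 writes y
-- then reads x. Sequential consistency forbids both reads seeing 0;
-- weaker models (e.g. with store buffers) allow it.
thread1, thread2 :: [Op]
thread1 = [Write "x" 1, Read "y" "r1"]
thread2 = [Write "y" 1, Read "x" "r2"]

main :: IO ()
main = do
  let outcomes  = map run (interleavings thread1 thread2)
      forbidden = M.fromList [("r1", 0), ("r2", 0)]
  putStrLn ("SC allows r1 = 0, r2 = 0: " ++ show (forbidden `elem` outcomes))
```

The real models in the literature are defined axiomatically or operationally rather than by brute-force enumeration, but the enumeration makes it easy to see what "allowed outcome" means.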
More references and details are available in my thesis.
Wednesday, April 1, 2009
FRP Design Challenge
To start off with, the purpose of this post is to find out what ideas are out there and maybe start some new ones. I'm not looking to get people to work on things that I've thought up; I'm more interested in seeing what they think about them.
I think programmers need to start using the word design more than art. Design better captures the balance between the usefulness of a piece of software and how nice it is to work with, and even just look at. Haskell and other functional languages seem to have huge potential for software design. So that should be the key: abstractions that combine mathematical beauty with practical purpose and are just nice to work with. Abstractions that are only about mathematical beauty are more art than design.
I've been quite interested in functional reactive programming, since these projects seem to be working towards a nice solution for programming both user interfaces and games. One thing I've noticed is that the emphasis is on local definitions of what components do, which seemed to be hinted at in this Lambda the Ultimate post.
This probably works well for user interfaces, but for games it would be really nice to be able to mix: 1. global rules, such as physics, that are best computed by considering the whole world at once, and 2. local rules, which define specific actions for certain types of objects in the world. The challenge is to find a way to mix these two ways of defining how objects act that is nice to work with in a functional language.
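To be clear about what I mean by the two kinds of rules, here is one possible shape for them in Haskell; the names are hypothetical and this is just a sketch of the data flow, not a claim about how any existing FRP library works.

```haskell
-- A global rule sees the whole world at once (e.g. a physics step over
-- a time slice dt).
type GlobalRule obj = Double -> [obj] -> [obj]

-- A local rule lets a single object react, given its state before and
-- after the global step (e.g. change colour based on acceleration).
type LocalRule obj = obj -> obj -> obj

-- One frame: run the global rule, then let each object react locally.
-- Assumes the global rule keeps objects in the same order.
step :: GlobalRule obj -> LocalRule obj -> Double -> [obj] -> [obj]
step global local dt world = zipWith local world (global dt world)
```

The design question is whether this explicit pairing of before and after states can be expressed as behaviours and events inside an FRP framework, rather than as a hand-written step function.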
The challenge proceeds in stages:
1. First implement a global rule for objects in the world, let's say circles, using the Barnes-Hut gravity simulation algorithm. The only purpose of this stage is to get some objects orbiting around each other.
2. Now get individual objects to react to the global algorithm. Have each circle change color based on its acceleration, its velocity, or the change in either.
3. Allow this reaction to be customized: some circles turn blue, some turn red, and so on (there's a sketch of stages 1-3 after this list).
4. Get individual objects to react to each other. Add a simple collision response; it could just be a color change and does not need to consider physics.
5. Allow this collision response reaction to be customized.
6. Add flocking behavior to objects, now objects are aware of their environment.
7. Allow the flocking behavior to be customized.
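To make the first three stages concrete, here is a minimal sketch in plain Haskell, with no FRP framework; a naive O(n^2) gravity step stands in for Barnes-Hut, and every name in it is my own invention. Each circle carries a customizable reaction that turns the acceleration computed by the global step into a colour.

```haskell
type Vec = (Double, Double)

data Circle = Circle
  { pos   :: Vec
  , vel   :: Vec
  , col   :: String
  , react :: Vec -> String   -- stage 3: per-circle reaction to acceleration
  }

type World = [Circle]

add, sub :: Vec -> Vec -> Vec
add (a, b) (c, d) = (a + c, b + d)
sub (a, b) (c, d) = (a - c, b - d)

scale :: Double -> Vec -> Vec
scale k (a, b) = (k * a, k * b)

-- Stage 1 (global rule): naive gravity, summing the pull from every
-- other circle (Barnes-Hut would approximate this with a quadtree).
accel :: World -> Circle -> Vec
accel world c = foldr add (0, 0) (map pull world)
  where
    pull o =
      let diff     = sub (pos o) (pos c)
          (dx, dy) = diff
          d2       = dx * dx + dy * dy + 0.01   -- softened to avoid blow-ups
      in scale (1 / (d2 * sqrt d2)) diff        -- unit masses, unit G

-- Stages 2 and 3: after the global step, each circle reacts locally to
-- its own acceleration, using its own reaction function.
step :: Double -> World -> World
step dt world = map move world
  where
    move c =
      let a = accel world c
      in c { pos = add (pos c) (scale dt (vel c))
           , vel = add (vel c) (scale dt a)
           , col = react c a }

-- Two customised reactions for stage 3.
speedy, calm :: Vec -> String
speedy (ax, ay) = if ax * ax + ay * ay > 1 then "red"  else "white"
calm   (ax, ay) = if ax * ax + ay * ay > 1 then "blue" else "grey"

main :: IO ()
main = mapM_ (print . pos) (iterate (step 0.1) start !! 10)
  where
    start =
      [ Circle (0, 0) (0, 0)   "white" speedy
      , Circle (3, 0) (0, 0.5) "grey"  calm
      ]
```

The interesting part of the challenge is how to get the same separation of the global step from the customizable local reactions out of an FRP framework, and then to extend it to the collision and flocking stages, where objects need to see their neighbours.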
This is a design challenge rather than a strict programming challenge. I know this can be done, but what are some nice abstractions that would suit the problem? Would they generalize to other situations, say if I swapped the physics engine for a user interface layout algorithm?
I would like to know where the likely problems would be in implementing these challenge stages in various frameworks, what I could read that would help me solve it myself, or any suggestions for changing the challenge itself.