Goodbye, `Car extends Vehicle` OO tutorials
I propose a new rule for discussions of object-oriented programming:

> **Anyone who brings up examples of Dog, Bike, Car, Person, or other
> real-world objects, unless they are talking about writing a clone of
> The Sims or something, is immediately shot**.
I feel that this new policy will improve the quality of the discourse
enormously.

[Henning Koch wrote] about his encounter with dependency injection:

> Aside from having snorted coke through my nose over “nearly
> tautologic diagrams” I feel the need to defend Martin Fowler’s
> [article][Fowler] because it had such a profound effect on me when
> it was published. Although I had been playing with “objects” and
> “classes” before, this article finally made me understand what OOP
> was all about. This is not true for many other articles and yes, I’m
> looking at you, shitty `Car extends Vehicle` OOP tutorial.

[Henning Koch wrote]: http://www.netalive.org/swsu/archives/2005/10/in_defense_of_the_nearly_tautologic_diagram_1.html
  (Henning Koch, posted on blog “Software will Save Us”, 2005-10-13,
  “In defense of the nearly tautologic diagram”)

[Fowler]: http://martinfowler.com/articles/injection.html
  (Inversion of Control Containers and the Dependency Injection
  pattern, by Martin Fowler, 2004-01-23)

Why `Car extends Vehicle` or `Duck extends Bird` are terrible examples
----------------------------------------------------------------------

The `Car extends Vehicle` or `Duck extends Bird` type of tutorial
obscures more than it illuminates. In good OO programming, we don’t
build class hierarchies to satisfy our inner Linnaeus. We build class
hierarchies to simplify the code, by allowing different parts of it to
be changed independently of each other and by eliminating duplication
(which comes to the same thing). Without any context as to what the
code needs to accomplish, you can’t judge whether a particular design
decision is good or bad.

The problem with examples like `Duck extends Bird` is that they give
you no understanding of the kinds of considerations you need to weigh
in deciding whether such design decisions are good or bad. In fact,
they actively sabotage that understanding. You can’t add code to
ducks.
You can’t refactor ducks. Ducks don’t implement protocols. You can’t
create a new species in order to separate some concerns (e.g. file I/O
and word splitting). You can’t fake the ability to turn a duck into a
penguin by moving its duckness into an animal of some other species
that can be replaced at runtime. You can’t indirect the creation of
ducks through a factory that produces birds of several species, and
even if you could, the analogy wouldn’t help at all in understanding
why the analogous thing might be a good idea in an actual program.
Penguins don’t implement the “fly” method found in other birds.
Whether you consider ducks to be birds or simply chordates does not
affect the internal complexity of ducks. And you don’t go around
causing things to fly without knowing what kind of bird they are.
(Ducks themselves decide when they want to fly, and they certainly
seem to know they’re ducks and not vultures.)

So, although some people claim that such analogies “make it easier to
grok what polymorphism is about”, I disagree. They are misleading;
they obscure the relevant while confusing people with the irrelevant.

A simple interactive graphical environment is a better alternative
------------------------------------------------------------------

Here’s an example that I think would be better to use instead: the
`Visible` hierarchy in [Pygmusic], a kind of software drum machine. A
`Timer` is a horizontal strip on the screen with a stripe racing
across it. A `NumericHalo` is a spreading ripple on the screen that
fades. A `Sound` is a thing on the screen that makes a sound when a
`Timer`’s stripe hits it. A `Trash` is a thing that deletes `Sound`s
when you drop them on it. They all inherit from `Visible`, which
represents things that can be drawn on the screen and perhaps respond
to mouse clicks or drop events, but they do different things in those
three cases.
In addition, `Trash` and `Sound` are subclasses of `ImageDisplay`,
because the way they handle being drawn is simply to display a static
image, so that code is factored into a shared superclass.

[Pygmusic]: http://www.canonical.org/~kragen/sw/pygmusic/

I don’t claim that code is exemplary. You could argue that it reinvents
the wheel, and even if it didn’t, surely the design could be improved.
It’s not the simplest possible example: it contains a four-level
inheritance hierarchy and three separate protocols. The formatting
needs more vertical whitespace. Some of the methods are badly named.
But I think it’s good enough to show what inheritance is actually good
for: how you can use it to factor common parts of your code into a
superclass. And that’s a damn sight better than the vague nonsense
about ducks and ellipses.

It also has the advantage of being a lot more concrete than ducks and
Chevrolets, on one hand, or derivatives and polynomials, on the other.
All the objects in the `Visible` hierarchy paint themselves every
frame, and the program provides several different kinds of
interaction: `Visible` objects dynamically dispatch messages having to
do with drawing, being clicked, being dragged, having things dropped
on them, and making sounds. `draw()`, `handle_click()`, `play()`, and
`is_drop_target_for()` are all polymorphic.

One disadvantage is that it has to deal with a fair amount of
arithmetic; there’s lots of `/ float(n)` and `* self.rect.w + 0.5` and
`self.size**2/2 * (1 - (1 - age*2)**2)` and the like, which I think
reinforce a common misconception about computer programming: that you
need to learn algebra and arithmetic in order to write programs, and
that programs mostly deal with numbers. In fact, *graphics* and
*sound* engines do deal with numbers a lot, but most programs don’t.

Another disadvantage is that, in itself, it’s probably too big. It’s
around ten pages of code, which is far too much to expect a new
student to inhale in one gulp.
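To make the shape of that hierarchy concrete, here is a hedged sketch
in Python. Only the class names (`Visible`, `ImageDisplay`, `Sound`,
`Trash`) and the polymorphic method names (`draw`, `handle_click`,
`play`, `is_drop_target_for`) come from the description above; the
bodies, signatures, and constructor arguments are my own illustrative
assumptions, not Pygmusic’s actual code:

```python
# Illustrative sketch only: class and method names follow the prose
# above, but the bodies and signatures are invented for demonstration.

class Visible:
    """Something that can be drawn and can respond to UI events."""
    def draw(self, screen):
        raise NotImplementedError          # every subclass draws itself

    def handle_click(self, pos):
        pass                               # most Visibles ignore clicks

    def is_drop_target_for(self, item):
        return False                       # most Visibles accept no drops

class ImageDisplay(Visible):
    """Factors out 'draw a static image', shared by Sound and Trash."""
    def __init__(self, image, pos):
        self.image, self.pos = image, pos

    def draw(self, screen):
        screen.blit(self.image, self.pos)  # the one shared behavior

class Sound(ImageDisplay):
    def __init__(self, image, pos, sample):
        super().__init__(image, pos)
        self.sample = sample

    def play(self):                        # called when a stripe hits it
        self.sample.play()

class Trash(ImageDisplay):
    def is_drop_target_for(self, item):
        return isinstance(item, Sound)     # deletes Sounds dropped on it
```

Note how `Sound` and `Trash` inherit their drawing behavior from
`ImageDisplay` but override different parts of the `Visible` protocol,
which is the factoring the post is describing.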
In practice, you should start with a much smaller piece of code, with
a single polymorphic method in two classes, and work incrementally
from there.

What would the smaller example look like? It would probably still be
graphical. Graphics are nice for demoability, and they are also a
natural fit for my favorite pattern, Composite, which implies at least
one polymorphic method.

Object-orientation lets you have:

- more than one object implementing the same protocol;
- more than one method defined in the same protocol;
- more than one protocol implemented in the same object;
- methods and instance variables defined in different classes in the
  ancestry of an object.

An ideal example would show all four of these kinds of variation.

Object-orientation also lets you factor orthogonal axes of variation
into different objects. For example, a graphical object might have:

- different shapes;
- different positions;
- different animation paths;
- different behaviors when clicked;
- different colors;
- different kinds of canvas things are drawn on.

PyGame or `<canvas>` offers one useful fundamental kind of canvas; SVG
offers another. A synthetic canvas that handles windowing, rotation,
scaling, or even non-affine transforms might be a nice simple example
of a different kind of canvas. (Clipping a polygon to a window might
be an undesirable complication for an example, though; clipping just a
line might be acceptable.)

You can get an IFS fairly simply if you have some kind of primitive
drawable (say, a `Point`), some kind of `CompositeDrawable`, and a way
to make a transformed version of a `CompositeDrawable` be a part of
itself. Then you just need a way to make the recursion bottom out, say
by checking whether the bounding box has shrunk to zero.

What would the next step look like? Maybe it would include more exotic
OO features:

- classes as first-class objects;
- message forwarding (e.g. a message-logging proxy).

-- 
To unsubscribe: http://lists.canonical.org/mailman/listinfo/kragen-tol
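P.S. The Composite/IFS idea the post sketches in prose can be made
concrete in a few lines. In this hedged illustration only the names
`Point` and `CompositeDrawable` come from the post; `Transformed`,
`ListCanvas`, and all the details (including bottoming out on a scale
threshold rather than a literal bounding box) are my own assumptions:

```python
# One polymorphic method, draw(), shared by three kinds of drawable.

class ListCanvas:
    """Stand-in canvas that just records plotted coordinates."""
    def __init__(self):
        self.points = []
    def plot(self, x, y):
        self.points.append((x, y))

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def draw(self, canvas, scale=1.0, dx=0.0, dy=0.0):
        canvas.plot(self.x * scale + dx, self.y * scale + dy)

class Transformed:
    """Wraps a drawable, scaling and translating it when drawn."""
    def __init__(self, child, scale, dx, dy):
        self.child, self.scale, self.dx, self.dy = child, scale, dx, dy
    def draw(self, canvas, scale=1.0, dx=0.0, dy=0.0):
        if scale * self.scale < 0.01:   # "bounding box" ~zero: bottom out
            return
        self.child.draw(canvas, scale * self.scale,
                        dx + self.dx * scale, dy + self.dy * scale)

class CompositeDrawable:
    def __init__(self):
        self.children = []
    def add(self, child):
        self.children.append(child)
    def draw(self, canvas, scale=1.0, dx=0.0, dy=0.0):
        for child in self.children:
            child.draw(canvas, scale, dx, dy)

# An IFS: the composite contains one point plus two shrunken, shifted
# copies of itself, so drawing it recursively emits a cloud of points.
ifs = CompositeDrawable()
ifs.add(Point(0.0, 0.0))
ifs.add(Transformed(ifs, 0.5, 1.0, 0.0))
ifs.add(Transformed(ifs, 0.5, 0.0, 1.0))
```

The self-reference through `Transformed` is what makes it an IFS, and
the scale check is the post’s “make the recursion bottom out”.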
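P.P.S. The “message-logging proxy” mentioned as a possible next step
might look like this in Python; the post names the idea but gives no
code, so this sketch is entirely my own illustration (using
`__getattr__`, which Python only invokes for attributes not found
normally, to forward and record calls):

```python
# Hedged sketch of message forwarding: a proxy that logs each method
# call before forwarding it to the wrapped object.

class LoggingProxy:
    def __init__(self, target):
        self._target = target
        self.log = []
    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if callable(attr):
            def forward(*args, **kwargs):
                self.log.append((name, args))   # record the message
                return attr(*args, **kwargs)    # then forward it
            return forward
        return attr

class Duck:          # any target object works; the name is a nod to the post
    def quack(self):
        return "quack"

proxy = LoggingProxy(Duck())
```

Calling `proxy.quack()` returns what `Duck.quack()` returns, and the
call shows up in `proxy.log`.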