The Best Answer I Could Come Up With
Note that the only reason this requires C++ is for the operator signature syntactic sugar.
Update: Didn't need the explicit selfRV construction.
#include <assert.h>
#include <stdio.h>

struct selfRV {
    typedef struct selfRV (*selfSignature)();

    // The implicit conversions to and from the function-pointer type are what
    // let self() return "itself" and let the result be called again.
    selfRV(selfSignature ptr) : _ptr(ptr) { }
    operator selfSignature() const { return _ptr; }

private:
    selfSignature _ptr;
};

selfRV self() { return self; }

int main() {
    puts(self == self()()()()()()() ? "works" : "doesn't work");
    return 0;
}
I bet this could be done more easily with a FastDelegate...
http://www.codeproject.com/cpp/FastDelegate.asp
Still not sure why you'd want a function to return a pointer to itself, though, even with your python example (which is doing some really funky stuff, btw).
What's the fastdelegate solution? The real trick is that the return type of the self() function is recursively defined.
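The type you want is "pointer to a function returning its own type", and neither C nor C++ lets you spell that recursion directly (typedef selfFn (*selfFn)(); does not compile), which is why the answer above wraps it in a struct. A common workaround without the wrapper, sketched below on the assumption that you're willing to cast at every call site, is to launder the pointer through a generic function-pointer type and give up the type safety:

    #include <stdio.h>

    /* A generic function-pointer type used purely as a carrier. */
    typedef void (*anyFn)(void);

    /* self() returns itself, but the recursion is hidden behind a cast. */
    anyFn self(void) { return (anyFn)self; }

    int main(void) {
        anyFn p = self();                 /* p is "self", but typed as anyFn      */
        p = ((anyFn (*)(void))p)();       /* every further call needs a cast back */
        puts(p == (anyFn)self ? "works" : "doesn't work");
        return 0;
    }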
I'd do it in ASM, or in JavaScript with a dynamic function. But not necessarily with a handholding language like C that tries to hide its variables. :)
Oh wait... ah I see now. You're trying to identify the fundamental "returning self" system? Ah OK, LOL. Good luck.
So does this have to do with AI and concepts of self identity, then?
That's not what I was thinking about, but maybe you could apply it there.
I was more interested in discussing the expressiveness of a language, especially the type system.
How on Earth did you come to the conclusion that C is "handholding"? O_O
This is, without the least bit of hyperbole, one of the most ridiculous claims I have ever heard. It's not *the* most ridiculous, but it's certainly top ten material.
A better example. Imagine implementing a state machine that looks like this:

    def state1(input):
        if isOdd(input):
            return state1
        else:
            return state2

    def state2(input):
        if isEven(input):
            return state1
        else:
            return state2

You'd run it like this:

    state = state1  # initial state
    for i in inputs:
        state = state(i)

Solving the self problem is a prerequisite for this kind of structure.
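For comparison, here is a minimal sketch of that same state machine in C++, reusing the selfRV-style wrapper from the answer at the top of the thread. The names stateRV, isOdd, isEven, and the sample inputs are mine, not from the original post:

    #include <stdio.h>

    // Same trick as selfRV above, but for functions that take an int and
    // return the next state.
    struct stateRV {
        typedef stateRV (*stateFn)(int);
        stateRV(stateFn ptr) : _ptr(ptr) { }
        operator stateFn() const { return _ptr; }
    private:
        stateFn _ptr;
    };

    static bool isOdd(int n)  { return n % 2 != 0; }
    static bool isEven(int n) { return n % 2 == 0; }

    stateRV state2(int input);  // forward declaration so state1 can name it

    stateRV state1(int input) { return isOdd(input)  ? stateRV(state1) : stateRV(state2); }
    stateRV state2(int input) { return isEven(input) ? stateRV(state1) : stateRV(state2); }

    int main() {
        int inputs[] = { 1, 2, 3, 4, 5 };
        stateRV state = state1;            // initial state
        for (int i = 0; i < 5; ++i)
            state = state(inputs[i]);      // each call hands back the next state
        puts(state == state1 ? "ended in state1" : "ended in state2");
        return 0;
    }

The wrapper is only there because C++ won't let you write the recursive function-pointer type directly; Python gets the same structure for free because functions are first-class values.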
Because the variable addresses are managed by the compiler, of course. The user does not declare their addresses explicitly, a la ASM.
Right now I'm reading about some fundamentals of programming language theory, such as the lambda calculus, and it reminded me of this thread. Remember when I said that Iowa State's computer science department was so heavy into theory? Well, it may have come off as if I think theory is useless. That is certainly not the case. It's just that theory takes so long to migrate into tools used by large numbers of people.
The Python example above is taken almost directly from the definition of booleans and conditionals in the lambda calculus. It's not something you would use in production code, because there are so many better ways to do booleans in most programming languages, but it's certainly worth understanding how it works and how those constructs are built from the lambda calculus.

Why have a calculus? Because otherwise we tend to create overly complicated systems that are very hard to reason about. I mean, look at the current mechanisms for concurrent programming. There are major issues there. Concurrent programming is difficult to get right. Part of the paper I'm reading talks about a calculus called the pi-calculus, which formalizes a model of concurrent programming. This theory is starting to find its way into some programming languages (Polyphonic C#, Comega), which is a good thing. It will help us build provably correct concurrent systems.
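To make the boolean remark above concrete, here is a tiny sketch of Church booleans (my own illustration, specialized to ints rather than arbitrary terms): a boolean is just a function that picks one of two alternatives, and a conditional is nothing more than calling it, which is the same "return a function and call it" shape as the state-machine example:

    #include <stdio.h>

    // Church booleans: TRUE selects its first argument, FALSE its second.
    typedef int (*churchBool)(int, int);

    int churchTrue(int a, int b)  { return a; }
    int churchFalse(int a, int b) { return b; }

    // "if cond then x else y" becomes cond(x, y)
    int churchIf(churchBool cond, int x, int y) { return cond(x, y); }

    int main() {
        printf("%d %d\n", churchIf(churchTrue, 1, 0),
                          churchIf(churchFalse, 1, 0));  // prints: 1 0
        return 0;
    }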
In short, I just wanted to clarify my opinion on fundamental theory research. (I still think that Iowa State University could stand to make some money by researching topics that are a little bit closer to being used by the masses though.)
Why is there so much emphasis on proving things? I mean, if it looks like it works, and most of the time it works, doesn't that mean it works?
Besides, aren't all real world concepts and ideas impossible to concretely prove?
I looked up the pi calculus, but I'm not interested in it. I think assumption is a better road to better programming.
It's not April Fools anymore.
What? Is that an insult?
You can't really believe that "assumptions" are the stepping stone to larger software. Making assumptions is the antithesis of good software. That's why we have such things as asserts and provable algebras.
Can't you imply some things that are commonly used in a certain way? Like, for example, replacing the meaningless "index" variable in a loop designed to cycle through a range of values with a "built-in" index that is kept track of by the compiler?
Yeah, that's where you're codifying an idiom into the language proper. Always a good thing. That said, there are some things, like formalized concurrent programming, where having a mathematical foundation and being able to prove the absence of deadlock or livelock let you build larger systems and know they won't fail.
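As a small illustration of that kind of codified idiom (my own example, not from the thread): C++11's range-based for loop makes the compiler keep track of the position, so the bookkeeping index disappears:

    #include <stdio.h>

    int main() {
        int values[] = { 10, 20, 30 };

        // The hand-rolled idiom: i exists only to walk the array.
        for (int i = 0; i < 3; ++i)
            printf("%d\n", values[i]);

        // The idiom codified into the language (C++11 range-based for):
        // the compiler manages the iteration state, no index variable needed.
        for (int v : values)
            printf("%d\n", v);

        return 0;
    }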
Proofs really are important. Algorithms and data structures like quicksort and red-black trees, for example, are proven to do the right thing. (Any theory-of-algorithms class will teach you how to prove that code works. It's not always easy. :)) If they weren't proven to be correct, they wouldn't be used as much.
Those are good points. I guess it's more difficult for me to accept proofs because my mind "proves" something for me. I agree, though, that people whose minds do not prove concepts automatically are more likely to use a proven method than an unproven one, and of course it's always good to know that what you are doing is just what you are doing and not something else, too...