Programming thread

imo the main use case for goto is breaking through multiple layers of nested loops, for everything else there's better options
for everything else there's better options
I feel like there are way better options for this too. e.g. functions and early returns. If nesting becomes a problem, your function is probably doing way more than it should and you would likely benefit from breaking it up into smaller functions.
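Roughly the contrast I mean, as a minimal sketch; the names (process_first_match, find_match) and the flattened matrix layout are invented purely for illustration:
C-like:
/* goto version: one jump exits both loops, then work continues below */
void process_first_match(int *m, int rows, int cols, int target)
{
    int i, j;
    for (i = 0; i < rows; i++)
        for (j = 0; j < cols; j++)
            if (m[i * cols + j] == target)
                goto found;
    return;                      /* nothing matched */
found:
    m[i * cols + j] = 0;         /* ... do something with the match ... */
}

/* factored-out version: a small helper with an early return does the same escape */
int find_match(const int *m, int rows, int cols, int target)
{
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            if (m[i * cols + j] == target)
                return i * cols + j;
    return -1;
}

void process_first_match_no_goto(int *m, int rows, int cols, int target)
{
    int at = find_match(m, rows, cols, target);
    if (at >= 0)
        m[at] = 0;               /* ... do something with the match ... */
}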

Basically, when things get complex, programmers often reach for "return" or "break" or "continue", not because they want to return a value or explicitly need a conditional loop, but because they need "goto" but have been told "goto" is bad (or, worse, their language doesn't have it). This is the same sort of abstraction inversion that gives us things like RAII-instead-of-finally-blocks in C++ or object-with-no-fields-and-a-single-method-instead-of-function-pointer in Java. It's niggerlicious and I cannot take any language that refuses to have gotos (or an explicit equivalent such as tail calls) seriously.
On the contrary, I think your examples are more niggerlicious.

C-like:
loop_start:
foo = input();
if (len(foo) == 0) goto loop_end;
/* ... do something ... */
goto loop_start;
loop_end:
Why? Seriously. Just why?

Which is functionally the same thing, except letting the language create the labels and the goto at the bottom for you, at the cost of the language also creating a conditional branch that always fails (will typically be optimized out, but the cognitive overhead and eyesore remains).
How is this not a more cognitively overloading eyesore than a built-in for or while loop?

C-like:
/* Attempt to acquire using method 1 */
if (succeeded) goto use;
/* Attempt to acquire using method 2 */
if (succeeded) goto use;
/* Attempt to acquire using method 3, and so on */
if (succeeded) goto use;
/* ... */
use:
/* Complex use of acquired thing */
Why not do
C-like:
methodOne();
if (success)
    // Return initialized shit here

methodTwo();
if (success)
    // Return initialized shit here

// ...
  
return null;
and have a descriptively named function with the single purpose of initializing/acquiring whatever shit is needed? Why sperg about stack frames and turn your code into spaghetti to avoid them?

Of course, the truly skilled at coping with an insistence on avoiding gotos will factor out all the "acquire" parts into a separate procedure so that they can use "return" to skip over all the remaining methods on success.
>Of course, the truly skilled at being big stupid poopyface morons with an insistence on being retarded will do [SANE THING].

Some of the worst code ever written was the result of someone trying to be overly clever in how they implemented it. Readability is often heavily sacrificed.
As a rule of thumb, don't be clever; be explicit.
 
Why? Seriously. Just why?
I believe I already explained why: it accomplishes what you cannot do by putting the actual test directly inside the condition part of a "while" statement. It is true that most programmers will find the "while(1)" form far more familiar, and therefore easier to read. It also avoids the necessity of choosing names. I am not advocating for goto maximalism. I am saying that there are things that are needlessly difficult without them, and it is absurd for a general-purpose programming language to omit them.
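For reference, the while(1) form of the same loop-and-a-half, reusing the pseudocode input()/len() helpers from the earlier snippet; the actual exit test still cannot go in the loop condition:
C-like:
while (1) {
    foo = input();
    if (len(foo) == 0)
        break;          /* the real test sits mid-body; the while(1) condition itself never fails */
    /* ... do something ... */
}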
and have a descriptively named function with the single purpose of initializing/acquiring whatever shit is needed? Why sperg about stack frames and turn your code into spaghetti to avoid them?
Because this is just goto with extra steps. If you have a good reason to factor out the code, factor it out. If it's because you need a label to jump to but are scared of four letters, that's stupid. If the code in question needs to access and/or modify variables in the enclosing scope, you'll likely need to complicate the interface to your new procedure to the point that it will significantly increase the confusion of anyone trying to read it. Now any control flow involving the containing procedure needs to be serialized into some sort of (documented) return value, then deserialized back into control flow.

And, of course, you have to think of a name, and that's just as much a point against procedures as it is against labels. Even greater, in fact, since most people only create global procedures, and they are typically lexically farther-removed from their point of reference.
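To make that serialize/deserialize step concrete, here is a minimal sketch; struct thing, acquire_result, and try_acquire are all invented for the example:
C-like:
struct thing { int handle; };
enum acquire_result { ACQ_OK, ACQ_FAIL };

/* the factored-out "acquire" part: what used to be a jump on success
   now has to be encoded as a documented return value */
static enum acquire_result try_acquire(struct thing *out)
{
    /* attempt method 1, method 2, method 3 ... */
    out->handle = 42;
    return ACQ_OK;
}

static void caller(void)
{
    struct thing t;
    /* ... and decoded back into control flow at the call site */
    if (try_acquire(&t) != ACQ_OK)
        return;
    /* complex use of t */
}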
>Of course, the truly skilled at being big stupid poopyface morons with an insistence on being retarded will do [SANE THING].

Some of the worst code ever written was the result of someone trying to be overly clever in how they implemented it. Readability is often heavily sacrificed.
As a rule of thumb, don't be clever; be explicit.
If you can't jump to a label but can jump to the end of a procedure, and you need to jump to a target, the natural solution is to make a procedure whose end is your target. It is an entirely reasonable solution to an entirely unreasonable problem that people create for themselves. It is a testament to how ingrained that stylistic choice is that one could consider pushing an address onto the stack followed by popping it into the PC register to be "explicit" compared to jumping to the address directly, which is "overly clever".

I mean this seriously, take a step back and look at how you are framing this. Not adding an extra level of indirection is so unthinkable to you that you have decided that it counts as turning your code into spaghetti solely for the sake of not adding an extra level of indirection. This doesn't happen anywhere else. Nobody wonders why isdigit takes int instead of int* or why malloc returns void* instead of void**. There's just no good reason not to do it that way. How can we even explain this response other than as a social phenomenon?
 
i think arrays should take indices that are unicode strings

arr[🍑] = 60;
arr[🍆] = 7;
int i = arr[🍑] + arr[🍆];
cout << i; // 67

PHP gotchu, fam.

Of course, nobody actually writes code like that.

PHP has labels and goto too, though I've never used them nor seen anyone else use them.
 
Not adding an extra level of indirection is so unthinkable to you that you have decided that it counts as turning your code into spaghetti solely for the sake of not adding an extra level of indirection.

I literally, just now, encountered a situation where a goto would have been perfect. Instead, I had to twist and contort the control flow around the lack of the feature.

snotang is 100% right here.
 
Now any control flow involving the containing procedure needs to be serialized into some sort of (documented) return value, then deserialized back into control flow.
And why is this a necessarily bad thing? Why are you seemingly allergic to shit that helps people quickly make sense of what you wrote?

Because this is just goto with extra steps. If you have a good reason to factor out the code, factor it out. If it's because you need a label to jump to but are scared of four letters, that's stupid. If the code in question needs to access and/or modify variables in the enclosing scope, you'll likely need to complicate the interface to your new procedure to the point that it will significantly increase the confusion of anyone trying to read it.
If you can't jump to a label but can jump to the end of a procedure, and you need to jump to a target, the natural solution is to make a procedure whose end is your target. It is an entirely reasonable solution to an entirely unreasonable problem that people create for themselves. It is a testament to how ingrained that stylistic choice is that one could consider pushing an address onto the stack followed by popping it into the PC register to be "explicit" compared to jumping to the address directly, which is "overly clever".
>Gee, scoping and value passing is such a pain. Let's just make everything effectively global in scope and jump around in a single monolithic function instead. It'll be so super cool and fast because muh program counter.

How can we even explain this response other than as a social phenomenon?
idk if you being too stupid/inexperienced to understand good practices and the practical benefits they bring is a social phenomenon. It's a you phenomenon.
 
I was hoping for more thread activity, but this isn't quite what I had in mind. Maybe that old monkey's paw is real after all...

Behold as GCC generates the same output for both a real while loop and a manual goto imitation.
https://godbolt.org/z/fGTWjrPf3 (-O0)
https://godbolt.org/z/4nenMMYTe (-O2)
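Something along these lines, as a simplified stand-in for the linked snippets rather than a copy of them:
C-like:
/* a real while loop */
int sum_while(const int *a, int n)
{
    int s = 0, i = 0;
    while (i < n) {
        s += a[i];
        i++;
    }
    return s;
}

/* the manual goto imitation of the same loop */
int sum_goto(const int *a, int n)
{
    int s = 0, i = 0;
loop_start:
    if (!(i < n)) goto loop_end;
    s += a[i];
    i++;
    goto loop_start;
loop_end:
    return s;
}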

Yes, you're both very smart for understanding that loops and functions are an abstraction over jumps, the compiler knows this too.
 
I mean, I've written a decent amount of G-code, and GOTO is pretty much the only way to jump around the program, and it seems to work fine.
Then again, the code looks like this:
Code:
G00 G90 G17 G54 G40 G49 G80
G81 G99 Z-0.45 R0.1 F8. L0
G70 I1.25 J10. L6
 
It feels like you might be more familiar with PHP? That language decided to return False for some reason instead of null or none upon failure of a function.

Not sure what you mean by "failure of a function" but this isn't quite how PHP works. If you specify a non-void return type for a function (as in C, "void" just means the function doesn't return anything) and then the function code doesn't explicitly return anything when it runs, a fatal error results. If you suck and don't specify a return type at all, then it's effectively the same as specifying a void return type. If you assign a variable using a function that has a void return type for some reason, the var will have a null value.
 
And why is this a necessarily bad thing? Why are you seemingly allergic to shit that helps people quickly make sense of what you wrote?
I'm beginning to see where our views diverge.

You are concerned with "quickly making sense" - getting a surface-level view of what is involved. Indirection is an advantage here, because it lets you introduce names (aside from labels, obviously) which allow for the concise communication of intent. This allows for it to read more like pseudocode and compacts the "big picture" into something that can fit on one screen.

I am concerned with precise behavior. When I read someone else's code, it is because something has broken and I need to determine exactly how it broke and how to fix it without significantly breaking compatibility. If I've narrowed the problem down to a procedure, the only difference that a certain portion of that procedure being factored out is going to make to me is that I need to do an extra search through the source tree to find its definition so that I can analyze its behavior, and then navigate back to the original procedure once I've finished that (but in practice I need to analyze the interaction as well, which means jumping back and forth). Intent is little help here because if the code matched the intent, I wouldn't need to be reading it in the first place, and at least one other person has already been fooled by reading the intent, following the usual assumption that most bugs are not intentional.

You know what can concisely communicate intent without having to add an indirection that makes it slower to analyze? A comment. You should be writing them anyway to document your additional procedure, but without needing to document a freshly-invented interface, it can be much shorter.
>Gee, scoping and value passing is such a pain. Let's just make everything effectively global in scope and jump around in a single monolithic function instead. It'll be so super cool and fast because muh program counter.
>If you have a good reason to factor out the code, factor it out. If it's because you need a label to jump to but are scared of four letters, that's stupid.

You missed the part where I said I think coolsville sucks btw. Nothing I have said has anything to do with performance. I understand that I can be rather dry and boring in my presentation, but that is no justification to hallucinate fictional interlocutors, even if you have more fun arguing with them. "There exist cases where a certain justification is not sufficient to factor into another procedure" passes into your eyes and "I hate all scoping and procedures and want everything to be monolithic" bounces around inside your head.

It has everything to do with choosing the right tool for the job. You cannot discuss the relative merits of two tools without discussing what they do. What a procedure call does is a strict superset of what goto does, and the functionality cannot be accurately described without a concept of a "current execution point", which is usually called a "program counter" or an "instruction pointer". I apologize if these phrases trigger some trauma buried deep in your past, but you will come out a stronger person for having overcome it. The point of comparing the underlying behavior is to demonstrate the absurdity of describing using goto as "going out of your way" when it is literally the opposite of going out of your way (both lexically and at runtime) compared to a procedure call, in much the same way as I do not "go out of my way" to avoid breathing through a straw. If you wish to do so, more power to you, but my contention has been all along that it is unreasonable to demand that I do the same.
idk if you being too stupid/inexperienced to understand good practices and the practical benefits they bring is a social phenomenon. It's a you phenomenon.
I've clarified my position repeatedly. It's a rather moderate one. If you legitimately believe that "good practices" means "no gotos ever, they shouldn't even be in any programming language", then you are just the horseshoe-theory-opposite of @Fat Camp Intern.
 
You know what can concisely communicate intent without having to add an indirection that makes it slower to analyze? A comment. You should be writing them anyway to document your additional procedure, but without needing to document a freshly-invented interface, it can be much shorter.
I really disagree, as a function call boundary is a great spot to have a limited amount of state that could have changed before and after, which makes verifying whether the error is inside or outside of it much easier (assuming the function is well written, of course).

Though I can definitely see myself agreeing with your view in rare circumstances, namely when someone goes WAY too heavy on "clean code" or does something like passing a struct by reference into that function and mutating it inside anyway, causing the data it may have changed to explode in scope.
 
You are concerned with "quickly making sense" - getting a surface-level view of what is involved.
I am concerned with precise behavior. When I read someone else's code, it is because something has broken and I need to determine exactly how it broke and how to fix it without significantly breaking compatibility.
Are these two concerns mutually exclusive?

Nothing I have said has anything to do with performance.
It is a testament to how ingrained that stylistic choice is that one could consider pushing an address onto the stack followed by popping it into the PC register to be "explicit" compared to jumping to the address directly, which is "overly clever".
If this isn't a point about performance, then how is it at all relevant?

You're writing C, not assembly.
 
Raymond says the days of hand coding are mostly over.

Choice of language for software projects has become a very different game now that we have robot friends to do most of our code generation and translation for us.

I have people wondering why I just shipped a project in Rust when I don't like the language and don't hand-code in it myself. I did this because I am adjusted to current reality, and now I'm going to talk about that.

The age of hand-coding is mostly over. It no longer matters as much whether the computer language I use is comfortable to my hand, only whether the robot friend I'm using can generate it at high quality.

It also matters whether I can read the language, because I am going to want to run my eyeball over it to review the code. Rust meets that bar - I find it kind of spiky but basically readable.

Rust is a good deployment language for me to choose when (a) I want solid memory-safety guarantees, and (b) the code is already mature and I don't expect to need to do exploratory programming or serious feature development on it in the future.

In particular, this makes Rust a good place for me to land my old C projects. Which is why in the last couple of months I have migrated two of them to Rust. C to Rust translation by robots is cheap and easy now; I will probably continue to do this. Each time I get a bug report on one of these projects in the future, boing! Rusticated.

You may believe that Rustacea is stuffed with Communists and sexual deviants. You might even be right. I don't have to care whether that's true anymore, because I have a robot friend who is in all relevant ways smarter than they are.

The wider lesson here is that the developer and user community around a language doesn't matter as much as it used to in whether you should get involved with it. Because in the future, we're going to be relying on human community brains less and artificial intelligences more. And that future is now.

Not everything C gets moved to Rust, though. I lifted cvs-fast-export to Golang instead, because I think it's fairly likely that I'm going to have to do significant development work on it in the future, so the payoff from a language I'm more comfortable reading and modifying by hand goes up.

I'm certainly never going to start a project in C again. What would be the point, other than masochism? I spent 40 years writing C and I'm very good at it, but I will cheerfully leave it and its buffer overruns and its heap corruption and its undefined behaviors and its portability problems behind.

It helps that my robot friends are good at writing C code that doesn't have those problems, but...why even go there? Why expose yourself to those risks if the robot misses something?

These days I do my exploratory programming in Python or Golang. My robot friends are extremely good at generating code in both those languages. I think they're slightly higher leverage on Golang, possibly due to that language having a smaller surface?

Python used to be my favorite language. I soured on it for a while after the 2-to-3 transition was massively botched, and the GIL meant concurrency in it was a disaster area, and managing library dependencies became an even bigger disaster area. I'm a little happier with Python now that I can declare strict typing and uv has reduced dependency pain somewhat.

But if I think I'm going to have to write anything much larger than a glue script in Python, I just shrug and reach for Golang instead. I'm very comfortable in Golang. Over time, I'll probably migrate my older Python projects to Golang because that's cheap and easy now and the performance win can be quite significant.

I don't know what other languages I'm going to be using in the future. I do know that choosing a development language is a much less grave commitment than it used to be, because if it turns out to be not well suited for the job I'm doing, I can simply have my robot friend translate it to a better one.

Uncle Bob chimes in.
Authoring code by hand HAS GONE AWAY.

Engineering module structure and architecture has not.
 
Just one more GPT release bro. Trust me bro
The main reason I am using agentic coding is that most of the stuff I am building is just pulling data out of a database and encoding it to JSON. The more complex the task you give it, the more likely it is to get confused, and that's with some of the latest and greatest models. In that situation you have to basically guide it through the solution, which requires knowing how to program in the first place.
 