Programming thread

Do you already have a backend? If not, I'd recommend Svelte, if you don't mind the Node ecosystem. While it's still a JS framework it's a bit different from React; it's hard to explain, and I'm neither a webdev nor really a programmer, so just ask AI or Google for an explanation.

It includes its own backend, so I'd open an SSE connection when the client connects and use a remote function to update the counter. I can give you code examples if you want.

Otherwise, use the backend of your choice with an HTMX frontend?
IMO the ASP.NET backend is one of the best ones out there. But HTMX is pretty goated indeed.
 
Do you already have a backend?
No, no code has been written yet as I like to autistically plan everything out as much as possible to a fault before writing anything. My background in programming is embedded/real time systems work so I was hoping to stick with C++ for the backend with CrowCPP or possibly another C++ framework, but if it would be significantly easier in Svelte I will look into it.

As for HTMX, I looked into it after making my post and it looks very interesting. After a little digging it seems that this is exactly what I need. Thanks.
There is no magic,
Glad to know I am on the right track with web sockets. I did not know about long polling so thanks for the info.
I've found they'll behave themselves
I think I am in some sort of A/B test hell where they are starting to advertise and promote topics unnecessarily. The coding specific tools do seem to be better about this though.
 
It's a scam, but you can make money. I've worked for Leapforce before, which was the same thing for search engines. You get in and then get a set number of tasks. Once in a while they will ship in a test case they rated themselves, and if you miss on it you're banned or limited without feedback. The real ones remember the fried Guinea Pig test back in the day; that wiped half the testers out. I took a break that month, only to find nobody cool was left in the company chat. There were tons of those companies back in the day.
Ahhh, good to know. So even just not showing up for work or missing a secret test can get you canned.
I've learned over the years that if it isn't a job where you are building something for someone, it is a scam of some sort.
They were advertising one of these gigs on a job board I am on, and the moment I saw a variable rate with zero being the lowest per day, I knew it was a scam.
I have a theory that they are actually using the human applicants themselves to train AI models, and the gig work is just a ruse. They want data on how people think, how they fidget with their mouse when under pressure, all of it.
I am not a web dev, so please bear with me, but what is currently the best way to "display a value and sync its current value" with a backend? When I look this up online I see either huge JavaScript libraries that do this, or a simple JavaScript loop that polls an API route every few seconds. There must be a better way.

Basically I want a text label with up and down arrow buttons next to it. When the user presses up or down, the counter changes by one, and anyone else on the same webpage on a different device sees the change in as close to real time as possible. I think maybe I could use a websocket to a backend and sync data through that (i.e., a button-press event goes to the backend, and the backend propagates it to the other websocket connections), but I'm trying not to overthink it as I have never written JavaScript before, so hopefully there is an easier way.

When I ask any of the LLM chatbots for help they jump straight into React, and I'd rather blow my head smooth off than make a 50 MB webpage for something that should be simple.

The simplest solution to this is exactly how you'd imagine it to be between any two concurrently running programs, with the sole difference being that the data is communicated over the internet.

All that really involves is using your operating system's TCP/IP stack, which, lucky for you, is built into basically every device now.

There is already a library for interacting with this layer in C: the Berkeley sockets API. Don't be confused by stupid terms like "websockets" or whatever. All HTTP is, underneath, is an abstraction layer over sockets. A socket is really just a file descriptor plus some kernel state that handles the TCP/IP packets for you. You assign it a port and an IP address and some other options. Boom, done.

You can build your own layer and your own custom data schema too. The signal for an arrow press can be as simple as 1 bit: 1 for up, 0 for down. You don't need to send anything else. It's just data. YOU interpret what it means.

Obviously you will need to check on both ends to see if any new messages have been received from the client or host. This is where the idea of "polling" comes in: you check for messages every so often and queue up responses. Your OS deals with the real queue happening underneath, but you'll probably have a program-level queue if you use some polling library.

If you build a simple client like this you can basically have real-time syncing, with latency as low as the ping time between your client and server. And why wouldn't this be the case? Light travels pretty damn fast lol. It's a little painful when "web devs" chime in and say to check every few minutes lmao. You don't need a 500 MB framework to send a bit over the wire, so your intuition is spot on.

Protip: ditch the ++ and just learn C. You can still write "C style C++" (lmao) if you absolutely insist.
 
Organisations have been doing this for decades with psychometric tests. They already have all the data they need on this.
There is something like a million applicants per year on DataAnnotation.tech alone. With that kind of datapool, and access to modern devices that we certainly didn't have decades ago, I can't see these companies not harvesting user data.
By miss I mean failing.
Funnily enough, I found an article somehow magically merging the subject matter of both your comments lol...


"This paper analyzes The Guinea Pig Eligibility Test, an interactive digital artwork that discusses the ethics of biometric data collection and clinical research through participatory play. Framing consent as a gamified experience, the work exposes how systems reward vulnerability while masking mechanisms of control. Through subtle manipulation, participants are led to disclose personal data, ultimately resulting in the biometric capture and transformation of their faces into guinea pig–human hybrids, which are then placed in a collective “farm” environment. Drawing on theories of surveillance capitalism, biopolitics, and tactical media, this paper examines how the piece makes visible the power dynamics underlying digital interfaces and research protocols. It argues that this piece functions not only as an artistic critique but as an experiential ethical intervention that forces audiences to confront complicity, dehumanization, and the commodification of identity."
 
There is something like a million applicants per year on DataAnnotation.tech alone. With that kind of datapool, and access to modern devices that we certainly didn't have decades ago, I can't see these companies not harvesting user data.
What I am saying is that all of this has been known for quite a while. I don't understand why you would bother harvesting a load of data when the results are already understood.
 
What I am saying is that all of this has been known for quite a while.
I understand what you're saying but I think you're mistaken. Some of it has been known, sure. But definitely not all. The type of data they're collecting has never been collected before. Like I said, they didn't have the devices back then that they do now. They didn't have the data annotation software. They didn't know which metrics to even record. A psychometric test compared to what they are collecting today is like... comparing a lego toy set to a jet engine. It doesn't even come close.

We are talking about millions of humans measured in real time on dozens of metrics both biological and cognitive when presented with custom tailored tasks that of themselves have near infinite variety. They could divide the applicants into test groups based on any variety of metrics like location, age, languages spoken, race, present different tasks to different groups at different times of day... the data points are nearly endless.

EDIT: a key point of clarification: in the past, they were able to study and predict how humans reacted. This is what data harvesting through cellphones catered to. This new paradigm of data harvesting, however, is allowing them to model how humans actually think when given cognitive tasks. The "gig work" is just a ruse for that. Maybe they're training human-like agents with the data. Maybe they'll use it to detect individuals just based on their thinking alone. Who knows, it's pretty far reaching.
 
I understand what you're saying but I think you're mistaken. Some of it has been known, sure. But definitely not all. The type of data they're collecting has never been collected before. Like I said, they didn't have the devices back then that they do now. They didn't have the data annotation software. They didn't know which metrics to even record. A psychometric test compared to what they are collecting today is like... comparing a lego toy set to a jet engine. It doesn't even come close.

We are talking about millions of humans measured in real time on dozens of metrics both biological and cognitive when presented with custom tailored tasks that of themselves have near infinite variety. They could divide the applicants into test groups based on any variety of metrics like location, age, languages spoken, race, present different tasks to different groups at different times of day... the data points are nearly endless.

EDIT: a key point of clarification: in the past, they were able to study and predict how humans reacted. This is what data harvesting through cellphones catered to. This new paradigm of data harvesting, however, is allowing them to model how humans actually think when given cognitive tasks. The "gig work" is just a ruse for that. Maybe they're training human-like agents with the data. Maybe they'll use it to detect individuals just based on their thinking alone. Who knows, it's pretty far reaching.
That's so schizophrenic that I can believe it.
 
I understand what you're saying but I think you're mistaken. Some of it has been known, sure. But definitely not all.
You have to ask yourself what is more likely. AI training contracts are probably in high demand right now. Data about how people react under certain stress/time limits is already well understood, and people aren't throwing money at that stuff. There is far more money in the former than the latter, as it is currently in high demand and people are throwing money at anything AI related.
 
You have to ask yourself what is more likely. AI training contracts are probably in high demand right now. Data about how people react under certain stress/time limits is already well understood, and people aren't throwing money at that stuff. There is far more money in the former than the latter, as it is currently in high demand and people are throwing money at anything AI related.
What you're calling the latter is the real former. The actual AI training taking place is to train human-like models by analyzing the applicants applying for this type of gig work. The gig work is just a ruse. They are throwing money at these companies for this data. That's what I'm saying.

people are throwing money at anything AI related.
Exactly. Anything AI related, like using human workers as Guinea Pigs...

I know this seems far-fetched to some but you have to read up on the numerous stories written by some of these applicants. Gemini, for instance, requires its applicants to provide real voice recordings... ask yourself what is more likely: that they're verifying the applicant is a real person, or selling/training on their data.
 
What you're calling the latter is the real former. The actual AI training taking place is to train human-like models by analyzing the applicants applying for this type of gig work. The gig work is just a ruse. They are throwing money at these companies for this data. That's what I'm saying.
I don't think anyone would bother with this data for training AI that behaves like humans. More likely it would be for psyop/propaganda purposes. Basically, AI would be used to determine a psychological profile, i.e. how you move your mouse might be indicative of how neurotic you are, etc.

Though I don't see the purpose of paying people for that; people already give it all away for free. Captchas are used to harvest those types of data. Some kind of natural language processing model could determine your political beliefs from your social media posts, or from the type of videos you watch and how long you pay attention to them.
And that's something that's already being collected all the time.

What's scary is they could run targeted propaganda projects using organic content on sites like YouTube. You just have to measure what types of videos affect what kinds of demographics, then recommend accordingly to slightly nudge people toward "desired" opinions/values. You don't have to come up with ways to talk to people; you just put a spotlight on people who actually believe what they say, which is much more convincing.
That's pretty much what Google's "The Selfish Ledger" was about. And it's now a decade old.
 
I know this seems far-fetched to some but you have to read up on the numerous stories written by some of these applicants. Gemini, for instance, requires it's applicants to provide real voice recordings... ask yourself what is more likely: they're verifying the applicant is a real person, or selling/training on their data.
Unless there is actual evidence of what you claim (not what some people reckon), I will go with the simpler explanation.

Many scenes that are outside the norm lean toward conspiracy, a lot of the time for valid reasons, but I've seen it go off the rails plenty of times. People start repeating other people's stories and then it becomes a fact. When you look into it, you find out there is very flimsy evidence or none at all. I am generally fed up with hearing such stories because that is what starts off this loop.

Furthermore, I've seen quite a few places now require a real recording / face-to-face to prove you are a real person, which has nothing to do with this industry; e.g., recently US companies have been asking Korean applicants to say something negative about Kim Jong Un to weed out North Korean spies.

So it doesn't sound far-fetched at all. I have to go through a bunch of background checks for most of the work I do anyway.
 
There is already a library for interacting with this layer in C: the Berkeley sockets API. Don't be confused by stupid terms like "websockets" or whatever. All HTTP is, underneath, is an abstraction layer over sockets. A socket is really just a file descriptor plus some kernel state that handles the TCP/IP packets for you. You assign it a port and an IP address and some other options. Boom, done.

I don't think you know what a websocket is - which is fair, because the name is kind of misleading. It's a persistent (effectively, at least) connection between a browser and a web server that both sides can send and receive data through, but it sits at the same networking layer as HTTP - not as low-level as actual sockets, and not used for the same purpose. Web browser engines (at least currently…) cannot connect to arbitrary sockets, so doing this with BSD sockets rather than a websocket would not be possible. Two programs could run on the same device using websockets for IPC rather than BSD sockets, but that would be retarded (so I'm sure there's some bundle of Node programs doing this by default).
 
I don't think you know what a websocket is - which is fair, because the name is kind of misleading. It's a persistent (effectively, at least) connection between a browser and a web server that both sides can send and receive data through, but it sits at the same networking layer as HTTP - not as low-level as actual sockets, and not used for the same purpose. Web browser engines (at least currently…) cannot connect to arbitrary sockets, so doing this with BSD sockets rather than a websocket would not be possible. Two programs could run on the same device using websockets for IPC rather than BSD sockets, but that would be retarded (so I'm sure there's some bundle of Node programs doing this by default).


Your comment perfectly illustrates the ridiculous state of the "web" today. And is why web devs are universally mocked in every field of programming.

You are literally taking a fully functioning abstraction that already exists - a socket - building it up into a bloated, convoluted abstraction known as HTTPS - which is LITERALLY what the entirety of modern-day websites are communicating with browsers through - then, because you don't know how a socket works to begin with, or even what it is, you create ANOTHER abstraction on top of that abstraction layer - a websocket - to try and reinvent WHAT YOUR ENTIRE FIELD IS ALREADY USING UNDERNEATH.

YES YOU CAN USE A FUCKING SOCKET ON A WEB BROWSER. HOLY FUCKING SHIT WEB DEV STFU.
 
YES YOU CAN USE A FUCKING SOCKET ON A WEB BROWSER. HOLY FUCKING SHIT WEB DEV STFU.

Can you, now? Please go ahead and show me how I could connect to an IRC server from browser JavaScript, without using some sort of server-side translation layer.

I don't disagree that we're using web engines for very wrong purposes nowadays but if you're going to make that argument at least understand it.
 
Unless there is actual evidence of what you claim (not what some people reckon). I will go with the simpler explanation.

Many scenes that are outside the norm lean to conspiracy, a lot of the time this is because of valid reasons, but I've seen it go off the rails plenty of times. People start repeating other people's stories and then it becomes a fact. When you look into it, you find out, there is very flimsy evidence or none at all. I am generally fed up with hearing such stories because that what starts off this loop.

Furthermore, I've seen quite a few places now require a real recording / face-to-face to prove you are a real person, which have nothing to do with this industry e.g. recently US companies have been asking Korean applicants to say something negative against Kim Jung Un to weed out North Korean spies.

So it doesn't sound far-fetched at all. I have to go through a bunch of background checks for most of the work I do anyway.

This has nothing to do with conspiracy, you retarded boomer. STFU. There are countless reports of shady activity from these companies. You call them "scams", but somehow lack the mental fortitude to ask yourself how these scams actually make money off of you, the victim. Lmao. It's from harvesting your data, you fuckwit.

Jesus fucking Christ. Unless any of you codemonkeys and skids have created the type of software I've created from scratch, shut the actual fuck up. I've reverse engineered mobile apps, hacked video games, and built a phone farm that I could remotely control from a central hub and inject commands into hundreds of devices with a single click. You fuckers have used JavaScript frameworks. Shut. the. fuck. up.

C > C++, nuff' said.
 
Jesus fucking Christ. Unless any of you codemonkeys and skids have created the type of software I've created from scratch, shut the actual fuck up. I've reverse engineered mobile apps, hacked video games, and built a phone farm that I could remotely control from a central hub and inject commands into hundreds of devices with a single click. You fuckers have used JavaScript frameworks. Shut. the. fuck. up.
lmao, tuff
C > C++, nuff' said.
lmao, no RAII
 
i lowkey want to actually try algol 68 now that gcc 16 has been released
it has a few funny features like spaces in variable names and different terminology from other languages (types are called modes, for example)
but ga68 is currently NOT present in the arch repositories

also the person responsible for the a68 gcc frontend hosts his shit (like his a68-mode fork) on sourcehut :crybleed:
 
lmao, no RAII

Translation: I am deeply afraid of memory, so I never learned how to manage it.

I blame the jews for this 100%. (((Stroustrup))) Talk about conspiracy: our educational system has been infiltrated by communists who don't want people understanding how to allocate and free memory without crashing a system lol (note: it's really not fucking difficult)
 