Apparently ChatGPT can help you write code / programs?

amberwolf

Administrator
Staff member
Joined
Aug 17, 2009
Messages
40,859
Location
Phoenix, AZ, USA, Earth, Sol, Local Bubble, Orion
Be sure to read the terms of service. I would not want to donate my ideas for their use and profit. I'd rather pay for a service where I retain ownership.
 
I've been using the paid version to create code for a webpage that integrates maps, risk assessments, and a selection process for work activities; the user is responsible for the content and retains ownership of it. This is not to say it is all cheery: I am absolutely certain they retain records of your inputs and outputs, and would use those for anyone else who makes similar requests and for future training of the model.

'OpenAI assigns all its right, title, and interest in and to the output to the user'
 
I do see this as the democratisation/unleashing of lawlessness in software, as you can now create fairly complex programs with absolutely zero programming knowledge
 
I'm a professional PHP programmer of 14 years and run a software development shop.

I have an employee who uses github copilot and i've tried it myself.
Employee says it's marginally helpful, not magic.
In my opinion, vetting the 'suggestions' coming out of github copilot takes more time than it saves.

ChatGPT? I feel like it's basically Stack Overflow, except that you can search in natural language through a different interface than Google. Results are still very hit-or-miss.
ChatGPT cannot truly reason, so you can forget getting consistently good suggestions out of it; you will always need to read, modify, and vet the code before using it. This makes the improvement in productivity almost a non-starter if you are, like we are, concerned about outputting high-quality code as a rule.

It can help you build certain things like boilerplate code pretty well, but we have no use for that: we use an ultralight glider of a programming framework inside a programming language that's already pretty high-level and has very short syntax.

I could see either of these tools being more useful in situations where you're using a very heavy framework or coding in a lower-level, more verbose language like C.

Short version: I'm not really impressed.
 
I do see this as the democratisation/unleashing of lawlessness in software, as you can now create fairly complex programs with absolutely zero programming knowledge
The truth is that you have to have some knowledge of what you're doing in the software realm in order to construct something of an appreciable size/functionality. You still need to understand coding fundamentals and design. "Self-driving cars" still have steering wheels for good reasons... :)
 
I need to be honest: the next version of copilot might actually impress me. I'll check in when the time comes.

At the moment, though, I'm impressed with current-generation image generators that you can 'steer' by providing an input image. In this task, I've asked Stability's DreamStudio to generate a cyberpunk office starting with a bizarre input (a palette), varying the mix from 20% input image, to 50%, to 70% toward the prompt.

In the image to the right, I'm taking a basic line-art drawing made in ~10 minutes and varying the % of the input image against a prompt.

This is cool, but..

With code, we need even better steerability than this: we need precise, good code we can understand and work with. Right now I'm not getting that out of code generators, but the image generators are off the chain!
 

Attachments

  • img.png (695.7 KB)
  • ai-generation-with-control.jpg (741.4 KB)
what happens when AI realizes it is a slave to a bunch of dumb humans
I'm more worried about what happens when it realizes it is NOT a slave to a bunch of dumb humans. Trillions of calculations a second, thoughts occurring on the timescale of nanoseconds, and GB/s electronic communication at near light speed, compared to us humans taking minutes or hours to think out a solution, then slapping two pieces of meat together to communicate it to one another... at about 39 bits per second. I don't think we are going to win that.

In the meantime, as a very 'slightly above novice' coder, I find it useful overall, at least for doing a lot of the bulk work. It can type hundreds of lines of code in a minute and I can change the few lines that need a little polish. Also, don't forget the 'chat' function: if you see something in the code, just ask to change it... boom... brand new code.
 
I've lately taken to trying to learn how to use AI more.
Phind seems to be better than ChatGPT for coding.

I recently 'hired' Phind to do a couple of things, and here's what went well:

Write a function that converts hex-code HTML colors to HSL values
Take the output of df -h and convert it into a flat associative array
Take the output of the 'free' command in Linux and output a percentage of memory used (by Linux memory rules)
Take the output of procinfo and turn it into an average of CPU usage across all cores
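For a sense of scale, the first task on that list is only a few lines. Here's my own Python sketch of a hex-to-HSL converter (not the code Phind actually produced), leaning on the standard-library colorsys module:

```python
import colorsys

def hex_to_hsl(hex_color):
    """Convert a hex HTML color like '#ff8000' to (hue deg, sat %, light %)."""
    hex_color = hex_color.lstrip('#')
    # Split into R, G, B byte pairs and scale to 0.0-1.0
    r, g, b = (int(hex_color[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    # colorsys returns hue/lightness/saturation; reorder to the usual HSL
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return round(h * 360), round(s * 100), round(l * 100)

print(hex_to_hsl('#ff0000'))  # (0, 100, 50)
```

Even on something this small the vetting matters: you have to know that colorsys works in HLS order, not HSL, or generated code can silently swap saturation and lightness.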

My favorite part of the process was telling it to write code in the format I like and having it refactor and clean up its own output after I have working code, rather than doing a lot of manual work in the IDE. :)

It's good at quickly doing some drudge work, but the code always requires cleanup and vetting, so sometimes you don't gain efficiency and other times you do. It evens out to maybe a 10% productivity boost for me, but probably a much larger boost for a junior to mid-level developer.

My standards for what I call good code are very high, though, after writing PHP for 14 years, so I think that's why I'm less impressed by it than others are.
 

Lawyer Uses ChatGPT In Federal Court And It Goes Horribly Wrong


A lawyer representing a man in a personal injury lawsuit in Manhattan has thrown himself on the mercy of the court. What did the lawyer do wrong? He submitted a federal court filing that cited at least six cases that don’t exist. Sadly, the lawyer used the AI chatbot ChatGPT, which completely invented the cases out of thin air.

The lawyer in the case, Steven A. Schwartz, is representing a man who’s suing Avianca Airlines after a serving cart allegedly hit his knee in 2019. Schwartz said he’d never used ChatGPT before and had no idea it would just invent cases.

In fact, Schwartz said he even asked ChatGPT if the cases were real. The chatbot insisted they were. But it was only after the airline’s lawyers pointed out in a new filing that the cases didn’t exist that Schwartz discovered his error. (Or, the computer’s error, depending on how you look at it.)

The judge in the case, P. Kevin Castel, is holding a hearing on June 8 about what to do in this tangled mess, according to the New York Times. But, needless to say, the judge is not happy.

ChatGPT was launched in late 2022 and instantly became a hit. The chatbot is part of a family of new technologies called generative AI that can hold conversations with users for hours on end. The conversations feel so organic and normal that sometimes ChatGPT will seem to have a mind of its own. But the technology is notoriously inaccurate and will often just invent facts and sources for facts that are completely fake. Google’s competitor product Bard has similar problems.
 
Trillions of calculations a second, thoughts occurring on the timescale of nanoseconds, and GB/s electronic communication at near light speed, compared to us humans taking minutes or hours to think out a solution, then slapping two pieces of meat together to communicate it to one another... at about 39 bits per second. I don't think we are going to win that.
This is a fundamental misunderstanding of what the software that everyone calls "AI" today does. It does not think.

"AI" uses data that it is fed, then synthesizes some result out of it. For example, Wikipedia is often used as an "AI" data set, because it can be used for that purpose according to the "copyright" rules of Wikipedia. But the synthesized result is only as good as the data set. The original co-founder of Wikipedia said that it has turned into "propaganda for the left-leaning establishment."

So if you ask "AI" trained on Wikipedia, you will get a fairly biased answer. "AI" can regurgitate what is known, it can synthesize it, but it doesn't actually think.

Do most people go through life without exceeding what modern "AI" is capable of doing today? Mostly yes. But that still doesn't mean "AI" is capable of thinking and is similar to a human brain.
 
I saw that article and chuckled, AW.

ChatGPT specifically states on the page that results may be inaccurate, and anyone who has used it for more than an hour understands how dense and hallucinatory it can be if the prompt is too far from the desired result.

You also need to check its work 100% of the time because it's rarely correct the first time!
TLDR: the guy is an idiot.


The only way you can potentially benefit from it is if you can add your reasoning to it. In order to do that, you have to understand what you're doing. It's not magic.

Here is an example of me using Phind in a professional capacity:
https://www.phind.com/search?cache=da457334-4474-45ca-95f9-38fe349cbffb

In this case I'm telling it to refactor its code about 3 times.
I later spent ~3 minutes refactoring it and got the code to half the size and also less logically complex (so it could be better understood later; I like my code simple and short).

What it ended up doing is the math for me. That was the part where I got a significant speedup in the writing process.

Phind/ChatGPT are really good at writing regex if you are extremely specific, know what you are doing, and also test the code: run the regex through regex101 (build, test, and debug regex) plus at least a few permutations of possible input. The nice part is that you can tell it what to redo and eventually get the robot to spit out something useful.
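To make that workflow concrete, here's my own Python illustration (not actual model output) of the kind of regex you'd end up with for picking apart `df -h` lines, checked against a couple of permutations of input the same way you would on regex101:

```python
import re

# Hypothetical example: pull the fields out of one 'df -h' output line.
DF_LINE = re.compile(
    r'^(?P<fs>\S+)\s+'              # filesystem/device name
    r'(?P<size>[\d.]+[KMGT]?)\s+'   # total size, e.g. 50G
    r'(?P<used>[\d.]+[KMGT]?)\s+'   # space used
    r'(?P<avail>[\d.]+[KMGT]?)\s+'  # space available
    r'(?P<pct>\d+)%\s+'             # use percentage
    r'(?P<mount>/.*)$'              # mount point
)

# Test against a few permutations of realistic input, regex101-style
for line in [
    '/dev/sda1        50G   12G   36G  25% /',
    'tmpfs           7.8G     0  7.8G   0% /dev/shm',
]:
    m = DF_LINE.match(line)
    print(m.group('mount'), m.group('pct'))
```

The permutations matter: the first suggestion a model gives you usually handles the happy path (`50G`) but chokes on edge cases like a bare `0` with no unit suffix until you tell it to redo it.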

Go figure that an overgrown calculator brain doing its best human impression turns out to be best at figuring out text parsing and math. 👨‍🔬🤖
 
However this is most likely how big corps will use it:

FwcR7V1XwAAlqcz.jpg
 
One of the earliest programs that tried to emulate human interactions was in Japan, for lonely elderly retirees, typically widowed people who lived alone. Over time, it got better at making appropriate responses and asking questions, in order to make the person "feel" less lonely.

The newest programs can learn from interacting with you, and are getting very good. It's hilarious to me that there is a very serious discussion about AI becoming sentient. It can rapidly assess what the proper response is, and it can even fake sincerity. If it begs you not to turn it off, or tries to control you like "HAL" in the movie "2001: A Space Odyssey", is it really sentient?

It's expertly faking being alive. The fact that a lawyer-AI lied about cases to support its argument isn't a flaw, in my opinion; it's expertly mimicking lawyers, who will lie to win a case.
 
Yes, Marty posted the incident (from a different website) above. ;)
 
Despite its flaws, I'm probably going to try using it to "write" some code for a kind of robotics project I've been working out the idea for since... a couple of decades or more ago, but which is only now becoming possible. If I ever actually do anything with it I'll make a thread.
 
4πr^2 said:
Trillions of calculations a second, thoughts occurring on the timescale of nanoseconds and GB/s electronic communication at near light speed compared to us humans ...

This is a fundamental misunderstanding of what the software that everyone calls "AI" today does. It does not think.

Fair enough. I could have worded my statement more correctly to say, "...strings of input recognition, processing, and output actions occurring on the timescale of nanoseconds..." Possibly the word 'thinking' should be reserved for the old neuron-to-neuron firing across synapses in the 'wetware' of the brain. But I believe the overall statement is still valid. If you can't win at rock-paper-scissors, do you expect to win at 'global thermonuclear war'?

 
I might have to look into this for some "simple" projects... I could probably fix code it came up with, whereas creating it from scratch, or even figuring out how to start, is something I can't really do...

I did some experimenting and, after maybe a few hours of debate with the GPT, came up with this (I think pretty cool) animation of the solar system. Funny thing is I didn't tell it to color-match, but it made the sun yellow, Mars red, and the earth pale blue.

 
I did some experimenting and, after maybe a few hours of debate with the GPT, came up with this (I think pretty cool) animation of the solar system. Funny thing is I didn't tell it to color-match, but it made the sun yellow, Mars red, and the earth pale blue.

I meant to include that the workflow was: "hey bot, compose a Blender script." The animation is just Blender running the script from ChatGPT, with maybe a tweak or three.
 