Author Topic: Has anyone tried to use OpenAI and ChatGPT to generate Verilog code?  (Read 8430 times)


Offline radiolistener (Topic starter)

  • Super Contributor
  • ***
  • Posts: 4064
  • Country: ua
Just tried it, and it looks like it can generate very nice Verilog code.
You just ask it exactly what you need, and the AI writes the code and explains how it works.
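For example, a prompt along these lines (illustrative wording, not a transcript of my actual session) is enough to get back a complete module plus an explanation:

Code: [Select]
write a verilog module for an 8-bit up counter with synchronous reset and enable, and explain how it works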

It can also write CPU code in any programming language... If the task is not huge, it produces nice, working code, but it struggles with large programs: it starts doing unwanted things, and when you ask it to fix one issue, it fixes that issue but adds another...  :D

Just fantastic - it knows everything (math, DSP, physics, electronics), and you can talk with it like with a human...  8)
 

Offline woofy

  • Frequent Contributor
  • **
  • Posts: 371
  • Country: gb
    • Woofys Place
Re: Has anyone tried to use OpenAI and ChatGPT to generate Verilog code?
« Reply #1 on: February 24, 2023, 09:44:26 am »
ChatGPT is impressive - certainly the best natural language processor I've ever seen. But for all that, it's just a GitHub scraper with no real understanding of what it's generating. Take a look at Dave's latest vid, where ChatGPT generates C code and makes a fundamental mistake: it corrupts a variable it later relies on. And with Dave not noticing the error until later in the vid, attempts to guide a correction just produce more errors and even nonsense.
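The failure mode was roughly this shape (a Python paraphrase of that class of bug, not the actual C code from the video):

Code: [Select]
# Hypothetical paraphrase of the bug class: the generated code
# overwrites a variable it still needs later.
readings = [3, 7, 2, 9]

total = 0
for value in readings:
    total += value           # total == 21, needed below

total = max(readings)        # bug: clobbers the sum with a temporary

average = total / len(readings)
print(average)               # prints 2.25; the intended average is 5.25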

I remember back in the 80s there was a code generator program marketed as "The last program you will ever need". Also impressive for its time, but ultimately useless. ChatGPT is far better than that, but any programmer using it will still need to get their hands dirty.

Still, one day, when these AIs can run, test, and debug the code...
It's a brave new world.

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9940
  • Country: us
Re: Has anyone tried to use OpenAI and ChatGPT to generate Verilog code?
« Reply #2 on: February 24, 2023, 11:49:07 pm »
I used ChatGPT to generate PyTorch and TensorFlow solutions to the standard MNIST digit recognition problem.  Both versions worked perfectly even when responding to a typo in the query:

Code: [Select]
usinf pytorch with python, write code to solve digit recognition problem

An equivalent query was used to request the TensorFlow solution.

TensorFlow solution:
Code: [Select]
import tensorflow as tf
from tensorflow.keras.datasets import mnist
import time

start_time = time.time()

# Load MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize the data
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define the neural network architecture
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])

# Define the loss function and optimizer
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

# Compile the model
model.compile(optimizer=optimizer, loss=loss_fn, metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=5, batch_size=64)

# Evaluate the model on test data
model.evaluate(x_test, y_test)
print("--- %s seconds ---" % (time.time() - start_time))

PyTorch solution:
Code: [Select]
import torch
import torch.nn as nn
import torchvision.datasets as datasets
import torchvision.transforms as transforms
import time

start_time = time.time()

# Define hyperparameters
batch_size = 64
learning_rate = 0.001
num_epochs = 10

# Load MNIST dataset
train_dataset = datasets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True)
test_dataset = datasets.MNIST(root='./data', train=False, transform=transforms.ToTensor())

# Create data loaders
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)

# Define the neural network architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1)
        self.relu1 = nn.ReLU()
        self.maxpool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1)
        self.relu2 = nn.ReLU()
        self.maxpool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.fc = nn.Linear(7*7*32, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.relu2(x)
        x = self.maxpool2(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

# Instantiate the neural network
net = Net()

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)

# Train the neural network
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Forward pass
        outputs = net(images)
        loss = criterion(outputs, labels)
       
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
       
        # Print loss every 100 steps
        if (i+1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, i+1, len(train_loader), loss.item()))

# Test the neural network
net.eval()
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

    print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))

print("--- %s seconds ---" % (time.time() - start_time))

The only change I made was to add the runtime timing.

I don't know where ChatGPT got the code and I don't care.  It works and it saved me a lot of typing.  FWIW, the TensorFlow solution runs a LOT faster and is much shorter - though to be fair, its Keras model is a small dense network while the PyTorch one is a two-layer convnet, so part of the speed difference is the architecture rather than the framework.
« Last Edit: February 24, 2023, 11:51:10 pm by rstofer »
 

Online ralphrmartin

  • Frequent Contributor
  • **
  • Posts: 493
  • Country: gb
    • Me
Re: Has anyone tried to use OpenAI and ChatGPT to generate Verilog code?
« Reply #3 on: February 25, 2023, 11:03:25 pm »
I asked it for a linear-time algorithm for solving the travelling salesman problem, to see what would happen.

After some preamble, it said "here is a quadratic algorithm", output some code, and then said "that is thus a linear solution". Utter nonsense.
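For reference, the classic quadratic approach is the nearest-neighbour heuristic; a sketch like this (my own illustration, not ChatGPT's actual output) is plainly O(n²), and no amount of narration makes it linear. An exact linear-time TSP algorithm would, among other things, settle P vs NP.

Code: [Select]
import math

def nearest_neighbour_tour(points):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited city.

    The while loop runs n-1 times and each min() scans up to n cities,
    so the whole thing is O(n^2) - quadratic, not linear (and it is only
    approximate, not exact).
    """
    unvisited = set(range(1, len(points)))
    tour = [0]                    # start at city 0
    while unvisited:
        last = points[tour[-1]]
        nearest = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

print(nearest_neighbour_tour([(0, 0), (3, 0), (3, 4), (0, 4)]))  # [0, 1, 2, 3]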

(OK, not Verilog, but it clearly demonstrates why blindly using ChatGPT to generate code is a bad idea.)
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3251
  • Country: ca
Re: Has anyone tried to use OpenAI and ChatGPT to generate Verilog code?
« Reply #4 on: February 26, 2023, 05:52:07 pm »
Quote from: radiolistener
If the task is not huge, it produces nice, working code, but it struggles with large programs: it starts doing unwanted things, and when you ask it to fix one issue, it fixes that issue but adds another...  :D

Just like humans :)
 

