
Day.34

2025/8/23 · Source: https://blog.csdn.net/m0_64714591/article/details/148718446

Optimizing the time cost of training: the script below trains a small MLP on the Iris dataset, moves all data to the chosen device once up front, and measures end-to-end training time.

import torch
import torch.nn as nn
import torch.optim as optim
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import time
import matplotlib.pyplot as plt

# Use the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# Load the Iris dataset: 150 samples, 4 features, 3 classes.
iris = load_iris()
X = iris.data
y = iris.target

# Hold out 20% of the data for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale features to [0, 1]; fit the scaler on the training split only.
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Convert to tensors and move everything to the device once, up front,
# so the training loop incurs no per-step host-to-device transfers.
X_train = torch.FloatTensor(X_train).to(device)
y_train = torch.LongTensor(y_train).to(device)
X_test = torch.FloatTensor(X_test).to(device)
y_test = torch.LongTensor(y_test).to(device)

# A small MLP: 4 input features -> 10 hidden units -> 3 class logits.
class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(4, 10)   # input layer -> hidden layer
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(10, 3)   # hidden layer -> output logits

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = MLP().to(device)
criterion = nn.CrossEntropyLoss()   # expects raw logits and integer labels
optimizer = optim.SGD(model.parameters(), lr=0.01)
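Another lever on the time cost, separate from the device choice: plain SGD at lr=0.01 needs a very large epoch count on this problem, which is why num_epochs is set to 20000 below. A hedged alternative, not what this post benchmarks, is to swap in Adam, which typically reaches a comparable loss in far fewer epochs:

# Possible swap (an assumption, not the original setup): Adam usually
# converges much faster here, so num_epochs could be cut substantially.
optimizer = optim.Adam(model.parameters(), lr=0.01)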

num_epochs = 20000
losses = []

start_time = time.time()
for epoch in range(num_epochs):
    # Full-batch forward pass: the whole training set fits in one batch.
    outputs = model(X_train)
    loss = criterion(outputs, y_train)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Record the loss every 200 epochs for plotting.
    if (epoch + 1) % 200 == 0:
        losses.append(loss.item())

    # Print progress every 100 epochs.
    if (epoch + 1) % 100 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')

time_all = time.time() - start_time
print(f'Training time: {time_all:.2f} seconds')
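One caveat about timing GPU code with time.time(): CUDA kernels launch asynchronously, so the host clock can stop before queued work finishes (here the periodic loss.item() calls force synchronization, which masks the issue). A sketch of event-based timing that is robust either way, assuming a CUDA device is available:

# Sketch: CUDA-event timing around a shortened run of the same step.
start_evt = torch.cuda.Event(enable_timing=True)
end_evt = torch.cuda.Event(enable_timing=True)
start_evt.record()
for _ in range(1000):                     # fewer epochs, same training step
    out = model(X_train)
    step_loss = criterion(out, y_train)
    optimizer.zero_grad()
    step_loss.backward()
    optimizer.step()
end_evt.record()
torch.cuda.synchronize()                  # wait for all queued kernels
print(f"GPU time: {start_evt.elapsed_time(end_evt) / 1000:.2f} s")  # ms -> s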

Finally, plot the recorded loss curve:

# Losses were sampled every 200 epochs, so plot against those epochs.
plt.plot(range(200, num_epochs + 1, 200), losses)
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Training Loss over Epochs')
plt.show()
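The script moves X_test and y_test to the device but never uses them. A minimal evaluation sketch to close the loop (not part of the original script):

# Evaluate on the held-out split; no gradients are needed.
model.eval()
with torch.no_grad():
    logits = model(X_test)
    preds = logits.argmax(dim=1)
    accuracy = (preds == y_test).float().mean().item()
print(f"Test accuracy: {accuracy:.4f}")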
