PyTorch - IndexError: index out of range in self


Question

I am trying to build an LSTM-based seq2seq solution for sentence slot tagging.

For example:

Input sentence: my name is James Bond

Output slots: O O O B-name I-name
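For context, here is a minimal sketch of how such a pair might be encoded as index tensors. The question does not show how its vocab2sent and vocab2slot dicts (described further down) were built, so this construction is an assumption:

import torch

# Hypothetical encoding -- the original code only states that these dicts
# map each token to a unique number.
sentence = "my name is james bond".split()
slots = ["O", "O", "O", "B-name", "I-name"]

vocab2sent = {w: i for i, w in enumerate(sentence)}          # word -> index
vocab2slot = {s: i for i, s in enumerate(sorted(set(slots)))} # slot -> index

inputs  = torch.tensor([vocab2sent[w] for w in sentence], dtype=torch.long)
targets = torch.tensor([vocab2slot[s] for s in slots], dtype=torch.long)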

I can't get it working because of the error below:

IndexError: index out of range in self
> <ipython-input-37-19283c592e18>(12)<module>()
     10     set_trace()
     11     inputs = torch.tensor(training_data[0][0])
---> 12     tag_scores = model(inputs)
     13     print(tag_scores)
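
For reference, nn.Embedding raises exactly this error whenever a lookup index is greater than or equal to num_embeddings. A minimal reproduction:

import torch
import torch.nn as nn

emb = nn.Embedding(3, 2)   # valid indices are 0, 1, 2
emb(torch.tensor([2]))     # works
emb(torch.tensor([3]))     # IndexError: index out of range in self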

The code I am running when this happens:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class LSTMTagger(nn.Module):

    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super(LSTMTagger, self).__init__()
        self.hidden_dim = hidden_dim
        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)                       # (seq_len, embedding_dim)
        lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))    # (seq_len, 1, hidden_dim)
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1)) # (seq_len, tagset_size)
        tag_scores = F.log_softmax(tag_space, dim=1)
        return tag_scores

model = LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, len(vocab2sent), len(vocab2slot))
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

with torch.no_grad():
    inputs = torch.tensor(training_data[0][0])
    tag_scores = model(inputs)
    print(tag_scores)

for epoch in range(300):
    for sentence, tags in training_data:
        model.zero_grad()
        sentence_in = torch.tensor(sentence, dtype=torch.long)
        targets = torch.tensor(tags, dtype=torch.long)
        tag_scores = model(sentence_in)
        loss = loss_function(sentence_in, targets)
        loss.backward()
        optimizer.step()
with torch.no_grad():
    inputs = prepare_sequence(training_data[0][0], vocab2sent)
    tag_scores = model(inputs)
    print(tag_scores)

My variable values:

vocab2sent - dict with the input-sentence vocabulary (word : unique number)
vocab2slot - dict with the output vocabulary (slot : unique number)
inputs - tensor([ 229, 1056,  701,  330, 1093,   37,  166,  517, 1150, 1150, 1150, 1150,
        1150, 1150, 1150, 1150, 1150, 1150, 1150, 1150, 1150])
Model structure at runtime:
LSTMTagger(
  (word_embeddings): Embedding(1148, 560)
  (lstm): LSTM(560, 560)
  (hidden2tag): Linear(in_features=560, out_features=28, bias=True)
)
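
Given these values, a quick check (a sketch using the model and inputs tensor above) already exposes the mismatch behind the error:

print(inputs.max().item())                   # 1150
print(model.word_embeddings.num_embeddings)  # 1148 -- valid indices are 0..1147
# This assertion fails, which is exactly what the embedding lookup complains about:
assert inputs.max().item() < model.word_embeddings.num_embeddings
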
deep-learning lstm nlp python
2021-11-21 05:40:07

Best answer


Your embedding layer has a vocabulary size of 1148 (Embedding(1148, 560)), but your input contains the index 1150. Could that be the source of the problem?
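
A minimal sketch of the fix, assuming the repeated index 1150 in the inputs tensor is a padding token that was appended after vocab2sent was built: size the embedding from the largest index actually used, rather than from len(vocab2sent).

PAD_IDX = 1150               # assumption: 1150 is the padding token
vocab_size = PAD_IDX + 1     # or: max index over all encoded sentences, plus 1

model = LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, vocab_size, len(vocab2slot))
# Inside LSTMTagger, nn.Embedding(vocab_size, embedding_dim, padding_idx=PAD_IDX)
# would additionally keep the padding row fixed at zero.

# Unrelated to the IndexError, but note that the training loop above computes
# loss = loss_function(sentence_in, targets); it should score the model output:
# loss = loss_function(tag_scores, targets)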

2021-11-21 08:54:00
