What is the simplest way to convert a tensor of shape (batch_size, height, width), filled with n distinct values, into a tensor of shape (batch_size, n, height, width)?
I created the solution below, but it looks like there should be an easier, faster way:
def batch_tensor_to_onehot(tnsr, classes):
    tnsr = tnsr.unsqueeze(1)
    res = []
    for cls in range(classes):
        res.append((tnsr == cls).long())
    return torch.cat(res, dim=1)
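For context, here is a minimal, self-contained check of the function above (the shapes and class count below are made up for illustration):

```python
import torch

def batch_tensor_to_onehot(tnsr, classes):
    # Stack one boolean mask per class along a new channel dimension.
    tnsr = tnsr.unsqueeze(1)
    res = []
    for cls in range(classes):
        res.append((tnsr == cls).long())
    return torch.cat(res, dim=1)

labels = torch.randint(0, 5, (2, 4, 4))      # (batch_size, height, width)
onehot = batch_tensor_to_onehot(labels, 5)   # (batch_size, n, height, width)
print(onehot.shape)  # torch.Size([2, 5, 4, 4])
```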
You can use torch.nn.functional.one_hot.
For your case:
a = torch.nn.functional.one_hot(tnsr, num_classes=classes)
out = a.permute(0, 3, 1, 2)
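For example (the sizes here are illustrative; one_hot appends the class dimension last, so the permute moves it to position 1):

```python
import torch
import torch.nn.functional as F

tnsr = torch.randint(0, 3, (2, 4, 4))   # (batch_size, height, width), values in [0, 3)
a = F.one_hot(tnsr, num_classes=3)      # (batch_size, height, width, n)
out = a.permute(0, 3, 1, 2)             # (batch_size, n, height, width)
print(out.shape)  # torch.Size([2, 3, 4, 4])
```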
You can also use Tensor.scatter_, which avoids the .permute, but is arguably harder to understand than the straightforward approach proposed by @Alpha:
def batch_tensor_to_onehot(tnsr, classes):
    result = torch.zeros(tnsr.shape[0], classes, *tnsr.shape[1:],
                         dtype=torch.long, device=tnsr.device)
    result.scatter_(1, tnsr.unsqueeze(1), 1)
    return result
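A quick self-contained check of the scatter_ version (sizes are arbitrary; scatter_ writes a 1 at the channel index given by each pixel's label):

```python
import torch

def batch_tensor_to_onehot(tnsr, classes):
    # Allocate (batch_size, n, height, width) of zeros, then scatter a 1
    # along dim 1 at each pixel's class index.
    result = torch.zeros(tnsr.shape[0], classes, *tnsr.shape[1:],
                         dtype=torch.long, device=tnsr.device)
    result.scatter_(1, tnsr.unsqueeze(1), 1)
    return result

tnsr = torch.randint(0, 4, (2, 5, 5))
out = batch_tensor_to_onehot(tnsr, 4)
print(out.shape)  # torch.Size([2, 4, 5, 5])
```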
Benchmark results
I was curious and decided to benchmark the three approaches. I found there doesn't appear to be a significant relative difference between the proposed methods with respect to batch size, width, or height. The number of classes is the primary distinguishing factor. Of course, as with any benchmark, mileage may vary.
The benchmarks were collected using random indices and batch_size, height, width = 100. Each experiment was repeated 20 times and the average is reported. The num_classes=100 experiment is run once before profiling as a warm-up.
The CPU results show that the original method was probably best for fewer than about 30 classes, while on GPU the scatter_ approach appears to be fastest.
Tests conducted on Ubuntu 18.04, NVIDIA 2060 Super, i7-9700K.
The code used for benchmarking is provided below:
import torch
from tqdm import tqdm
import time
import matplotlib.pyplot as plt

def batch_tensor_to_onehot_slavka(tnsr, classes):
    tnsr = tnsr.unsqueeze(1)
    res = []
    for cls in range(classes):
        res.append((tnsr == cls).long())
    return torch.cat(res, dim=1)

def batch_tensor_to_onehot_alpha(tnsr, classes):
    result = torch.nn.functional.one_hot(tnsr, num_classes=classes)
    return result.permute(0, 3, 1, 2)

def batch_tensor_to_onehot_jodag(tnsr, classes):
    result = torch.zeros(tnsr.shape[0], classes, *tnsr.shape[1:],
                         dtype=torch.long, device=tnsr.device)
    result.scatter_(1, tnsr.unsqueeze(1), 1)
    return result

def main():
    num_classes = [2, 10, 25, 50, 100]
    height = 100
    width = 100
    bs = [100] * 20
    for d in ['cpu', 'cuda']:
        times_slavka = []
        times_alpha = []
        times_jodag = []
        warmup = True
        # The first iteration (num_classes[-1] prepended) is a warm-up and is not recorded.
        for c in tqdm([num_classes[-1]] + num_classes, ncols=0):
            tslavka = 0
            talpha = 0
            tjodag = 0
            for b in bs:
                tnsr = torch.randint(c, (b, height, width)).to(device=d)
                t0 = time.time()
                y = batch_tensor_to_onehot_slavka(tnsr, c)
                torch.cuda.synchronize()
                tslavka += time.time() - t0
            if not warmup:
                times_slavka.append(tslavka / len(bs))
            for b in bs:
                tnsr = torch.randint(c, (b, height, width)).to(device=d)
                t0 = time.time()
                y = batch_tensor_to_onehot_alpha(tnsr, c)
                torch.cuda.synchronize()
                talpha += time.time() - t0
            if not warmup:
                times_alpha.append(talpha / len(bs))
            for b in bs:
                tnsr = torch.randint(c, (b, height, width)).to(device=d)
                t0 = time.time()
                y = batch_tensor_to_onehot_jodag(tnsr, c)
                torch.cuda.synchronize()
                tjodag += time.time() - t0
            if not warmup:
                times_jodag.append(tjodag / len(bs))
            warmup = False

        fig = plt.figure()
        ax = fig.subplots()
        ax.plot(num_classes, times_slavka, label='Slavka - cat')
        ax.plot(num_classes, times_alpha, label='Alpha - one_hot')
        ax.plot(num_classes, times_jodag, label='jodag - scatter_')
        ax.set_xlabel('num_classes')
        ax.set_ylabel('time (s)')
        ax.set_title(f'{d} benchmark')
        ax.legend()
        plt.savefig(f'{d}.png')
        plt.show()

if __name__ == "__main__":
    main()