What is the simplest way to convert a tensor of shape (batch_size, height, width), filled with n values, into a tensor of shape (batch_size, n, height, width)? I created the solution below, but it seems like there should be an easier and faster way:

```python
def batch_tensor_to_onehot(tnsr, classes):
    tnsr = tnsr.unsqueeze(1)
    res = []
    for cls in range(classes):
        res.append((tnsr == cls).long())
    return torch.cat(res, dim=1)
```
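For reference, here is a quick sanity check (a sketch, assuming PyTorch is available; the sizes are arbitrary) confirming that the loop-and-cat approach produces the expected shape and can be inverted with argmax:

```python
import torch

def batch_tensor_to_onehot(tnsr, classes):
    # Build one (batch, 1, H, W) boolean mask per class value,
    # then stack the masks along a new channel dimension.
    tnsr = tnsr.unsqueeze(1)
    res = []
    for cls in range(classes):
        res.append((tnsr == cls).long())
    return torch.cat(res, dim=1)

labels = torch.randint(0, 5, (2, 4, 4))     # (batch_size, height, width)
onehot = batch_tensor_to_onehot(labels, 5)  # (batch_size, 5, height, width)

print(onehot.shape)                               # torch.Size([2, 5, 4, 4])
print(torch.equal(onehot.argmax(dim=1), labels))  # True
```

Each spatial position has exactly one 1 across the class dimension, so argmax over dim=1 recovers the original label map.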
You can use torch.nn.functional.one_hot. For your case:

```python
a = torch.nn.functional.one_hot(tnsr, num_classes=classes)
out = a.permute(0, 3, 1, 2)
```

|

You can also use Tensor.scatter_, which avoids the .permute but is arguably harder to understand than the straightforward method proposed by @Alpha:

```python
def batch_tensor_to_onehot(tnsr, classes):
    result = torch.zeros(tnsr.shape[0], classes, *tnsr.shape[1:], dtype=torch.long, device=tnsr.device)
    result.scatter_(1, tnsr.unsqueeze(1), 1)
    return result
```

Benchmark results

I was curious and decided to benchmark the three approaches. I found that there doesn't appear to be a significant relative difference between the proposed methods with respect to batch size, width, or height. The number of classes is the primary distinguishing factor, though, as with any benchmark, your mileage may vary.

The benchmarks were collected using random indices and batch size, height, width = 100. Each experiment was repeated 20 times and the average is reported. The num_classes=100 experiment is run once before profiling as a warm-up.

The CPU results indicate the original method is probably best for fewer than about 30 classes, while on GPU the scatter_ approach appears to be fastest.

Tests were performed on Ubuntu 18.04, NVIDIA 2060 Super, i7-9700K.

The code used for benchmarking is provided below:

```python
import torch
from tqdm import tqdm
import time
import matplotlib.pyplot as plt

def batch_tensor_to_onehot_slavka(tnsr, classes):
    tnsr = tnsr.unsqueeze(1)
    res = []
    for cls in range(classes):
        res.append((tnsr == cls).long())
    return torch.cat(res, dim=1)

def batch_tensor_to_onehot_alpha(tnsr, classes):
    result = torch.nn.functional.one_hot(tnsr, num_classes=classes)
    return result.permute(0, 3, 1, 2)

def batch_tensor_to_onehot_jodag(tnsr, classes):
    result = torch.zeros(tnsr.shape[0], classes, *tnsr.shape[1:], dtype=torch.long, device=tnsr.device)
    result.scatter_(1, tnsr.unsqueeze(1), 1)
    return result

def main():
    num_classes = [2, 10, 25, 50, 100]
    height = 100
    width = 100
    bs = [100] * 20

    for d in ['cpu', 'cuda']:
        times_slavka = []
        times_alpha = []
        times_jodag = []
        warmup = True
        for c in tqdm([num_classes[-1]] + num_classes, ncols=0):
            tslavka = 0
            talpha = 0
            tjodag = 0
            for b in bs:
                tnsr = torch.randint(c, (b, height, width)).to(device=d)

                t0 = time.time()
                y = batch_tensor_to_onehot_slavka(tnsr, c)
                torch.cuda.synchronize()
                tslavka += time.time() - t0
            if not warmup:
                times_slavka.append(tslavka / len(bs))

            for b in bs:
                tnsr = torch.randint(c, (b, height, width)).to(device=d)

                t0 = time.time()
                y = batch_tensor_to_onehot_alpha(tnsr, c)
                torch.cuda.synchronize()
                talpha += time.time() - t0
            if not warmup:
                times_alpha.append(talpha / len(bs))

            for b in bs:
                tnsr = torch.randint(c, (b, height, width)).to(device=d)

                t0 = time.time()
                y = batch_tensor_to_onehot_jodag(tnsr, c)
                torch.cuda.synchronize()
                tjodag += time.time() - t0
            if not warmup:
                times_jodag.append(tjodag / len(bs))
            warmup = False

        fig = plt.figure()
        ax = fig.subplots()
        ax.plot(num_classes, times_slavka, label='Slavka-cat')
        ax.plot(num_classes, times_alpha, label='Alpha-one_hot')
        ax.plot(num_classes, times_jodag, label='jodag-scatter_')
        ax.set_xlabel('num_classes')
        ax.set_ylabel('time (s)')
        ax.set_title(f'{d} benchmark')
        ax.legend()
        plt.savefig(f'{d}.png')
        plt.show()

if __name__ == "__main__":
    main()
```
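Before trusting benchmark numbers, it is worth confirming the three implementations actually produce identical outputs. A small equivalence check (a sketch on CPU, with arbitrary test sizes) is given below:

```python
import torch

def onehot_cat(tnsr, classes):
    # Original loop-and-cat approach, condensed into a comprehension.
    return torch.cat([(tnsr.unsqueeze(1) == c).long() for c in range(classes)], dim=1)

def onehot_functional(tnsr, classes):
    # one_hot puts the class dimension last; permute moves it to position 1.
    return torch.nn.functional.one_hot(tnsr, num_classes=classes).permute(0, 3, 1, 2)

def onehot_scatter(tnsr, classes):
    # Write 1s into a zero tensor at the positions given by the label indices.
    result = torch.zeros(tnsr.shape[0], classes, *tnsr.shape[1:],
                         dtype=torch.long, device=tnsr.device)
    result.scatter_(1, tnsr.unsqueeze(1), 1)
    return result

tnsr = torch.randint(0, 10, (3, 8, 8))
a, b, c = (f(tnsr, 10) for f in (onehot_cat, onehot_functional, onehot_scatter))
print(torch.equal(a, b) and torch.equal(b, c))  # True
```

All three return long tensors of shape (batch_size, classes, height, width), so torch.equal compares them directly; only their speed profiles differ.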