I can never remember what data `num_features` in `nn.BatchNorm1d` actually operates over, so I'm writing this blog post to help myself understand it. QAQ

## nn.BatchNorm1d(num_features=1)

Input tensor:

```python
input = torch.tensor([[[1., 2., 3., 4.]], [[0., 0., 0., 0.]]])
print(input.shape)
# torch.Size([2, 1, 4])
```
### Function introduction

The full signature is:

```python
torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True,
                     track_running_stats=True, device=None, dtype=None)
```

```python
BN1 = nn.BatchNorm1d(num_features=1, affine=False, eps=0)
# the input has only 1 feature (1 channel), and each feature has length 4
print("---BN1---")
print(torch.squeeze(BN1(input)))
```
Setting `eps=0` makes the ε term in the BatchNorm formula

y = (x − E[x]) / sqrt(Var[x] + ε)

equal to 0; it normally protects against division by zero, and defaults to `1e-5`. `affine=False` disables the learnable scale and shift (note that `affine` defaults to `True`). `BatchNorm1d` accepts input in `[B, C, L]` format, and here `num_features=1` corresponds to `C`. The output:

```
---BN1---
tensor([[-0.1690,  0.5071,  1.1832,  1.8593],
        [-0.8452, -0.8452, -0.8452, -0.8452]])
```
Reproducing the result of `nn.BatchNorm1d(num_features=1)` by hand:

```python
flat = torch.flatten(input)
ans = (input - torch.mean(flat)) / torch.sqrt(torch.var(flat, unbiased=False))
print(torch.squeeze(ans))
```

`torch.flatten()` is the key step. It shows exactly what the BN layer does when normalizing: it pools every value of a feature across all batches, whether the feature is a matrix or a sequence, and normalizes them together. The `unbiased=False` argument of `torch.var` means the variance is divided by n rather than n − 1, i.e. the biased estimator, which is what BatchNorm uses.

```
tensor([[-0.1690,  0.5071,  1.1832,  1.8593],
        [-0.8452, -0.8452, -0.8452, -0.8452]])
```
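As a quick self-check (my own addition, not from the original post), we can assert that the layer output and the hand-rolled version agree exactly when `eps=0`:

```python
import torch
import torch.nn as nn

# Same toy input as in the post: shape [B, C, L] = [2, 1, 4]
input = torch.tensor([[[1., 2., 3., 4.]], [[0., 0., 0., 0.]]])

# One channel, eps=0, no affine parameters, as above
BN1 = nn.BatchNorm1d(num_features=1, affine=False, eps=0)
out = BN1(input)

# Manual version: pool every value of the single channel, then normalize
flat = torch.flatten(input)
manual = (input - flat.mean()) / torch.sqrt(flat.var(unbiased=False))

print(torch.allclose(out, manual, atol=1e-6))  # True
```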
## nn.BatchNorm1d(num_features=4)

Input tensor, same as above, copied here:

```python
input = torch.tensor([[[1., 2., 3., 4.]], [[0., 0., 0., 0.]]])
print(input.shape)
# torch.Size([2, 1, 4])
```
### Function introduction

```python
BN2 = nn.BatchNorm1d(num_features=4, affine=False, eps=0)
print("---BN2---")
print(BN2(torch.squeeze(input)))
```
The `torch.squeeze` is required here: it changes the tensor's shape from `torch.Size([2, 1, 4])` to `torch.Size([2, 4])`, which matches the `[B, C]` format that `BatchNorm1d` expects; here `num_features=4` corresponds to `C`. The output:

```
---BN2---
tensor([[ 1.,  1.,  1.,  1.],
        [-1., -1., -1., -1.]])
```
Now for the key point: understanding `num_features=4`. For the current input data (shape `[B, C] = [2, 4]` after the squeeze), each feature is now a single value per sample, not a sequence or a matrix, so we can compute one feature by hand. Take the last feature, `[4, 0]`: its mean is 2 and sqrt(var) = sqrt(((4-2)^2 + (0-2)^2)/2) = 2, so ([4, 0] - mean)/sqrt(var) = [1, -1], exactly matching the output above.
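The hand calculation above can be sketched for all four features at once (a self-check of my own, using the same toy input): with `num_features=4`, statistics are computed per column over the batch dimension.

```python
import torch
import torch.nn as nn

x = torch.tensor([[1., 2., 3., 4.],
                  [0., 0., 0., 0.]])  # [B, C] = [2, 4]

# Per-feature (per-column) statistics over the batch dimension
mean = x.mean(dim=0)                            # tensor([0.5, 1.0, 1.5, 2.0])
std = torch.sqrt(x.var(dim=0, unbiased=False))  # tensor([0.5, 1.0, 1.5, 2.0])
manual = (x - mean) / std                       # [[1, 1, 1, 1], [-1, -1, -1, -1]]

BN2 = nn.BatchNorm1d(num_features=4, affine=False, eps=0)
print(torch.allclose(BN2(x), manual))  # True
```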
The full code from above (with the missing imports added):

```python
import torch
import torch.nn as nn

input = torch.tensor([[[1., 2., 3., 4.]], [[0., 0., 0., 0.]]])
print(input.shape)

# 1 feature (channel), each of length 4
BN1 = nn.BatchNorm1d(num_features=1, affine=False, eps=0)
print("---BN1---")
print(torch.squeeze(BN1(input)))

print("---BN1 Repeat---")
flat = torch.flatten(input)
ans = (input - torch.mean(flat)) / torch.sqrt(torch.var(flat, unbiased=False))
print(torch.squeeze(ans))

BN2 = nn.BatchNorm1d(num_features=4, affine=False, eps=0)
print("---BN2---")
print(BN2(torch.squeeze(input)))
# for BN2, just work it out by hand as shown above
```
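To round things off, here is a sketch of the general `[B, C, L]` case (my own generalization, not from the post): each of the `C` channels is normalized over batch and length jointly, which unifies the two views above. Note that passing a tuple of dims to `Tensor.var` requires a reasonably recent PyTorch.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(2, 3, 4)  # [B, C, L]

bn = nn.BatchNorm1d(num_features=3, affine=False, eps=0)

# For each channel, pool over batch (dim 0) and length (dim 2), then normalize
mean = x.mean(dim=(0, 2), keepdim=True)
std = torch.sqrt(x.var(dim=(0, 2), unbiased=False, keepdim=True))
manual = (x - mean) / std

print(torch.allclose(bn(x), manual, atol=1e-5))  # True
```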