【论文翻译】:Deep Residual Learning for Image Recognition
【论文来源】:Deep Residual Learning for Image Recognition
【翻译人】:BDML@CQUT实验室

Deep Residual Learning for Image Recognition

基于深度残差学习的图像识别

2016 IEEE Conference on Computer Vision and Pattern Recognition
2016 IEEE计算机视觉与模式识别会议(CVPR 2016)
Kaiming He Xiangyu Zhang Shaoqing Ren Jian Sun
Microsoft Research
{kahe, v-xiangz, v-shren, jiansun}@microsoft.com

Abstract

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

摘要

更深的神经网络更难训练。我们提出了一种残差学习框架(residual learning framework),用于简化比以往深得多的网络的训练。我们显式地将各层重新表述为参照层输入学习残差函数(residual function),而不是学习无参照的函数。我们提供了全面的经验证据,表明这些残差网络更容易优化,并且能够从大幅增加的深度中获得精度提升。在ImageNet数据集上,我们评估了深达152层的残差网络——比VGG网络[40]深8倍,但复杂度仍然更低。这些残差网络的集成(ensemble)在ImageNet测试集上取得了3.57%的错误率,该结果在ILSVRC 2015分类任务中获得第一名。我们还在CIFAR-10上给出了100层和1000层网络的分析。

对于许多视觉识别任务来说,表征的深度至关重要。仅凭极深的表征,我们就在COCO目标检测数据集上获得了28%的相对提升。深度残差网络是我们提交ILSVRC & COCO 2015竞赛的基础,在这些竞赛中,我们还在ImageNet检测、ImageNet定位、COCO检测和COCO分割任务上获得了第一名。

1 Introduction

1 引言

Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21,49, 39]. Deep networks naturally integrate low/mid/highlevel features [49] and classifiers in an end-to-end multilayer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence [40, 43] reveals that network depth is of crucial importance, and the leading results [40, 43, 12, 16] on the challenging ImageNet dataset [35] all exploit “very deep” [40] models, with a depth of sixteen [40] to thirty [16]. Many other nontrivial visual recognition tasks [7, 11, 6, 32, 27] have also greatly benefited from very deep models.
深度卷积神经网络[22,21]为图像分类带来了一系列突破[21,49,39]。深度网络以端到端的多层方式自然地整合了低/中/高层特征[49]与分类器,并且特征的“层级”可以通过堆叠的层数(深度)来丰富。最新证据[40,43]表明网络深度至关重要,在具有挑战性的ImageNet数据集[35]上的领先结果[40,43,12,16]都采用了“非常深”[40]的模型,深度从十六层[40]到三十层[16]不等。许多其他非平凡的视觉识别任务[7,11,6,32,27]也从非常深的模型中获益匪浅。

Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing /exploding gradients [14, 1, 8], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 8, 36, 12] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation [22].
在深度重要性的驱动下,一个问题随之而来:学习更好的网络是否就像堆叠更多的层那样容易?回答这个问题的一个障碍是众所周知的梯度消失/爆炸问题[14,1,8],它从一开始就阻碍收敛。不过,这一问题已在很大程度上被归一化初始化[23,8,36,12]和中间归一化层[16]所解决,使得数十层的网络能够在带反向传播[22]的随机梯度下降(SGD)下开始收敛。

When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [10, 41] and thoroughly verified by our experiments. Fig. 1 shows a typical example.
当更深的网络能够开始收敛时,一个退化问题随之暴露出来:随着网络深度的增加,准确率先达到饱和(这并不意外),然后迅速退化。出乎意料的是,这种退化并非由过拟合引起;如[10,41]所报道并被我们的实验充分验证的那样,向一个适当深度的模型中添加更多层反而会导致更高的训练误差。图1显示了一个典型示例。
[图1:退化问题的典型示例]
The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).
(训练准确率的)退化表明并非所有系统都同样容易优化。考虑一个较浅的体系结构,以及在其上添加更多层得到的较深结构。对于较深的模型,存在一个构造解:添加的层是恒等映射,其余层则从已学习的较浅模型中复制而来。这一构造解的存在表明,较深的模型不应产生比较浅的模型更高的训练误差。但实验表明,我们现有的求解器无法找到与该构造解相当或更好的解(或无法在可行的时间内做到)。

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping of F(x) := H(x)−x. The original mapping is recast into F(x)+x. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.
在本文中,我们通过引入深度残差学习框架来解决退化问题。我们不指望每几个堆叠的层直接拟合所需的基础映射,而是显式地让这些层拟合一个残差映射。形式上,将所需的基础映射记为H(x),我们让堆叠的非线性层拟合另一个映射F(x) := H(x) − x,原始映射则重写为F(x) + x。我们假设优化残差映射比优化原始的、无参照的映射更容易。在极端情况下,如果恒等映射是最优的,那么把残差推向零要比用一叠非线性层去拟合一个恒等映射容易得多。

The formulation of F(x)+x can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections [2, 33, 48] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.
F(x) + x的形式可以通过带有“快捷连接”(shortcut connections)的前馈神经网络来实现(图2)。快捷连接[2,33,48]是跳过一层或多层的连接。在我们的例子中,快捷连接仅执行恒等映射,其输出与堆叠层的输出相加(图2)。恒等快捷连接既不增加额外参数,也不增加计算复杂度。整个网络仍然可以通过带反向传播的SGD进行端到端训练,并且可以使用常见的库(例如Caffe [19])轻松实现,而无需修改求解器。
[图2:残差学习构建块]
We present comprehensive experiments on ImageNet [35] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.
我们在ImageNet [35]上进行了全面的实验,以展示退化问题并评估我们的方法。我们表明:1)我们的极深残差网络很容易优化,而对应的“普通”网络(简单地堆叠层)在深度增加时表现出更高的训练误差;2)我们的深度残差网络可以很容易地从大幅增加的深度中获得精度提升,产生远优于以往网络的结果。

Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.
类似的现象也出现在CIFAR-10数据集[20]上,这表明优化困难以及我们方法的效果并非仅限于某个特定数据集。我们在该数据集上成功训练了超过100层的模型,并探索了超过1000层的模型。

On the ImageNet classification dataset [35], we obtain excellent results by extremely deep residual nets. Our 152- layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [40]. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.
在ImageNet分类数据集[35]上,我们凭借极深的残差网络取得了出色的结果。我们的152层残差网络是迄今在ImageNet上出现过的最深的网络,而其复杂度仍低于VGG网络[40]。我们的集成模型在ImageNet测试集上的top-5错误率为3.57%,并在ILSVRC 2015分类竞赛中获得第一名。极深的表征在其他识别任务上也具有出色的泛化性能,使我们在ILSVRC和COCO 2015竞赛中进一步赢得了多项第一名:ImageNet检测、ImageNet定位、COCO检测和COCO分割。这些有力的证据表明,残差学习原理是通用的,我们期望它也适用于其他视觉和非视觉问题。
退化问题:随着网络深度的增加,准确率达到饱和然后迅速退化。即网络达到一定层数后继续加深模型会导致模型表现下降。
意外的是,这种退化并不是由过拟合造成的,也不是由梯度消失和爆炸造成的,在一个合理的深度模型中增加更多的层却导致了更高的错误率。

解决思路:一是创造新的优化方法,二是简化现有的优化问题。论文作者选择了第二种方法,即让更深的神经网络模型变得更容易求解。

2 Related Work

2 相关工作

Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 47]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.
残差表示。 在图像识别中,VLAD [18]是一种相对于字典用残差向量进行编码的表示方法,Fisher向量[30]可以看作VLAD的概率版本[18]。它们都是用于图像检索和分类的强大的浅层表示[4,47]。对于向量量化,对残差向量进行编码[17]被证明比对原始向量编码更有效。

In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [44, 45], which relies on variables that represent residual vectors between two scales. It has been shown [3, 44, 45] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.
在低层视觉和计算机图形学中,为了求解偏微分方程(PDE),广泛使用的多重网格法[3]将系统重新表述为多个尺度上的子问题,其中每个子问题负责较粗尺度与较细尺度之间的残差解。多重网格法的一种替代方案是分层基预处理[44,45],它依赖于表示两个尺度之间残差向量的变量。已有研究表明[3,44,45],这些求解器的收敛速度远快于未利用解的残差性质的标准求解器。这些方法说明,良好的重新表述或预处理可以简化优化过程。

Shortcut Connections. Practices and theories that lead to shortcut connections [2, 33, 48] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [33, 48]. In [43, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [38, 37, 31, 46] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [43], an “inception” layer is composed of a shortcut branch and a few deeper branches.
快捷连接。 引出快捷连接[2,33,48]的实践与理论已经被研究了很长时间。训练多层感知机(MLP)的一个早期做法是添加一个从网络输入直连到输出的线性层[33,48]。在[43,24]中,一些中间层被直接连接到辅助分类器,以缓解梯度消失/爆炸。[38,37,31,46]等论文提出了借助快捷连接实现的、对层响应、梯度和传播误差进行中心化的方法。在[43]中,一个“inception”层由一个快捷分支和若干更深的分支组成。

Concurrent with our work, “highway networks” [41, 42] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).
与我们的工作同时,“高速公路网络”(highway networks)[41,42]提出了带有门控函数[15]的快捷连接。这些门依赖于数据并带有参数,而我们的恒等快捷连接是无参数的。当门控快捷连接“关闭”(趋近于零)时,高速公路网络中的层表示的是非残差函数。相反,我们的公式总是学习残差函数;我们的恒等快捷连接从不关闭,所有信息始终被传递,同时还要学习额外的残差函数。此外,高速公路网络尚未证明在深度极大增加(例如超过100层)时能够带来精度提升。

3 Deep Residual Learning

3 深度残差学习

3.1 Residual Learning

3.1 残差学习

Let us consider H(x) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions2, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., H(x) − x (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function F(x) := H(x) − x. The original function thus becomes F(x)+x. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.
我们将H(x)视为由若干堆叠层(不一定是整个网络)拟合的基础映射,其中x表示这些层中第一层的输入。如果假设多个非线性层可以渐近地逼近复杂函数,那么这等价于假设它们可以渐近地逼近残差函数,即H(x) − x(假设输入和输出维数相同)。因此,与其期望堆叠层逼近H(x),我们不如显式地让这些层逼近残差函数F(x) := H(x) − x,原始函数由此变为F(x) + x。尽管两种形式都应能渐近地逼近所需函数(如假设的那样),但学习的难易程度可能有所不同。

This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.
这一重新表述的动机来自退化问题中反直觉的现象(图1,左)。正如引言中所讨论的,如果添加的层可以被构造为恒等映射,那么较深模型的训练误差不应大于其较浅的对应模型。退化问题表明,求解器在用多个非线性层逼近恒等映射时可能存在困难。而在残差学习的重新表述下,如果恒等映射是最优的,求解器只需将多个非线性层的权重推向零,即可逼近恒等映射。
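下面用一个极简的NumPy示例来说明这一点(译者补充,非论文内容):当残差分支的权重全部为零时,整个模块就退化为恒等映射。

```python
import numpy as np

x = np.random.randn(8)

def residual_branch(x, w1, w2):
    # F(x) = W2 · σ(W1 · x),σ 取 ReLU,偏置省略
    return w2 @ np.maximum(w1 @ x, 0.0)

w1 = np.zeros((8, 8))
w2 = np.zeros((8, 8))            # 求解器把权重推向零
y = residual_branch(x, w1, w2) + x
assert np.allclose(y, x)         # 此时 y = x,即逼近了恒等映射
```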

In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.
在实际情况下,恒等映射不太可能恰好是最优的,但我们的重新表述可能有助于对问题进行预处理(precondition)。如果最优函数更接近恒等映射而不是零映射,那么求解器参照恒等映射去寻找扰动,应当比把该函数当作全新的函数来学习更容易。我们通过实验(图7)表明,学习到的残差函数通常具有较小的响应,说明恒等映射提供了合理的预处理。

3.2 Identity Mapping by Shortcuts

3.2 通过快捷连接实现恒等映射

We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:

y = F(x, {Wi}) + x. (1)

Here x and y are the input and output vectors of the layers considered. The function F(x, {Wi}) represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, F = W2σ(W1x) in which σ denotes ReLU [29] and the biases are omitted for simplifying notations. The operation F + x is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ(y), see Fig. 2).

我们对每几个堆叠的层采用残差学习。构建块如图2所示。形式上,本文考虑的构建块定义为:

y = F(x, {Wi}) + x. (1)

这里的x和y是所考虑层的输入和输出向量。函数F(x, {Wi})表示要学习的残差映射。对于图2中具有两层的示例,F = W2σ(W1x),其中σ表示ReLU [29],为简化符号省略了偏置项。F + x操作通过快捷连接和逐元素加法来执行。我们在相加之后再采用第二个非线性(即σ(y),见图2)。
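作为示意,下面给出公式(1)所述构建块的一个PyTorch草图(译者补充,并非论文的官方实现;按第3.4节的做法在每个卷积后加入BN,类名与通道数均为示例假设):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """y = F(x, {Wi}) + x,其中 F 为两个 3x3 卷积层(示意实现)。"""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))   # 第一层及其非线性
        out = self.bn2(self.conv2(out))            # F(x):第二层(尚未激活)
        out = out + x                              # 恒等快捷连接:逐元素相加
        return self.relu(out)                      # 相加之后的第二个非线性 σ(y)

x = torch.randn(1, 64, 56, 56)
y = BasicBlock(64)(x)   # 输入输出维度相同,快捷连接不引入额外参数
```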

The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).
公式(1)中的快捷连接既不引入额外参数,也不增加计算复杂度。这不仅在实践中具有吸引力,而且对我们比较普通网络与残差网络也很重要:我们可以公平地比较同时具有相同参数量、深度、宽度和计算成本(除了可忽略的逐元素加法)的普通/残差网络。

The dimensions of x and F must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection Ws by the shortcut connections to match the dimensions:

y = F(x, {Wi}) +Wsx. (2)
在公式(1)中,x和F的维数必须相等。如果不是这种情况(例如,在改变输入/输出通道数时),我们可以通过快捷连接执行一个线性投影Ws来匹配维度:
y = F(x, {Wi}) +Wsx. (2)

We can also use a square matrix Ws in Eqn.(2). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus Ws is only used when matching dimensions.
我们也可以在公式(2)中使用方阵Ws。但我们将通过实验证明,恒等映射已足以解决退化问题且更为经济,因此Ws仅在需要匹配维度时使用。
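投影快捷连接(公式(2))通常用1×1卷积来充当Ws,仅在维度不匹配时使用。下面是一个示意性的草图(译者补充,类名与参数均为假设):

```python
import torch
import torch.nn as nn

class ProjectionShortcut(nn.Module):
    """公式(2):y = F(x, {Wi}) + Ws·x,这里用 1x1 卷积实现 Ws 以匹配维度。"""
    def __init__(self, in_ch, out_ch, stride):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=stride, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return self.proj(x)

ws = ProjectionShortcut(64, 128, stride=2)
print(ws(torch.randn(1, 64, 56, 56)).shape)   # torch.Size([1, 128, 28, 28])
```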

The form of the residual function F is flexible. Experiments in this paper involve a function F that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn.(1) is similar to a linear layer: y = W1x+x, for which we have not observed advantages.
残差函数F的形式是灵活的。本文中的实验涉及具有两层或三层的函数F(图5),更多层也是可以的。但如果F只有一层,则公式(1)就类似于一个线性层:y = W1x + x,我们没有观察到这种情况有什么优势。

We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function F(x, {Wi}) can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.
我们还注意到,尽管为简化起见,上述符号是关于全连接层的,但它们也适用于卷积层。函数F(x,{Wi})可以表示多个卷积层。在两个特征映射上逐个通道执行逐元素加法。

3.3 Network Architectures

3.3 网络体系结构

We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.
Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [40] (Fig. 3,left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).
我们测试了各种普通/残差网络,并观察到一致的现象。为了提供可供讨论的实例,我们描述ImageNet上的两个模型如下。
普通网络。 我们的普通基线(图3,中)主要受VGG网络[40](图3,左)设计理念的启发。卷积层大多使用3×3的滤波器,并遵循两条简单的设计规则:(i)对于相同的输出特征图尺寸,各层具有相同数量的滤波器;(ii)如果特征图尺寸减半,则滤波器数量加倍,以保持每层的时间复杂度。我们直接用步幅为2的卷积层执行下采样。网络以一个全局平均池化层和一个带softmax的1000路全连接层结束。图3(中)中加权层的总数为34。
[图3:网络架构示意(左:VGG-19;中:34层普通网络;右:34层残差网络)]
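下面这段小草图(译者补充,起始数值为示例假设)说明上述两条设计规则如何决定各阶段的滤波器数量:特征图尺寸减半时滤波器数量翻倍,从而使每层的时间复杂度大致保持不变。

```python
# 示意:从 56x56 特征图、64 个滤波器的阶段开始(数值为假设)
map_size, filters, stages = 56, 64, []
for _ in range(4):
    stages.append((map_size, filters))
    map_size //= 2      # 特征图尺寸减半(由步幅为 2 的卷积直接下采样)
    filters *= 2        # 滤波器数量翻倍,使每层时间复杂度大致不变

print(stages)           # [(56, 64), (28, 128), (14, 256), (7, 512)]
```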
It is worth noticing that our model has fewer filters and lower complexity than VGG nets [40] (Fig. 3, left). Our 34- layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).
Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.
值得注意的是,我们的模型比VGG网络[40](图3,左)具有更少的滤波器和更低的复杂度。我们的34层基线网络有36亿次FLOP(乘加运算),仅为VGG-19(196亿次FLOP)的18%。
残差网络。 在上述普通网络的基础上,我们插入快捷连接(图3,右),将网络变为对应的残差版本。当输入和输出维度相同时(图3中的实线快捷连接),可以直接使用恒等快捷连接(公式(1))。当维度增加时(图3中的虚线快捷连接),我们考虑两种选项:(A)快捷连接仍执行恒等映射,并用额外的零条目填充以增加维度,此选项不引入任何额外参数;(B)使用公式(2)中的投影快捷连接来匹配维度(用1×1卷积实现)。对于这两种选项,当快捷连接跨越两种尺寸的特征图时,其步幅为2。
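对于选项(A)的零填充恒等快捷连接,一种常见的示意实现如下(译者补充,并非论文给出的代码):先以步幅2做空间子采样,再在通道维末尾补零,不引入任何参数。

```python
import torch
import torch.nn.functional as F

def zero_pad_shortcut(x, out_channels, stride=2):
    """选项A:恒等映射 + 通道零填充,无额外参数(示意实现)。"""
    if stride > 1:
        x = x[:, :, ::stride, ::stride]        # 空间上以步幅 2 子采样
    pad = out_channels - x.size(1)
    return F.pad(x, (0, 0, 0, 0, 0, pad))      # 在通道维末尾补零

x = torch.randn(1, 64, 56, 56)
print(zero_pad_shortcut(x, 128).shape)         # torch.Size([1, 128, 28, 28])
```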

3.4 Implementation

3.4 实现

Our implementation for ImageNet follows the practice in [21, 40]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [40]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [12] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60×104 iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [13], following the practice in [16].
我们在ImageNet上的实现遵循[21,40]中的做法。调整图像大小,使其较短边在[256, 480]中随机采样,以进行尺度增强[40]。从图像或其水平翻转中随机裁剪224×224的图像块,并减去逐像素均值[21],同时使用[21]中的标准颜色增强。按照[16],我们在每次卷积之后、激活之前采用批归一化(BN)[16]。我们按[12]中的方法初始化权重,并从头开始训练所有普通/残差网络。我们使用mini-batch大小为256的SGD。学习率从0.1开始,当误差进入平台期时除以10,模型最多训练60×10⁴次迭代。我们使用0.0001的权重衰减和0.9的动量。按照[16]中的做法,我们不使用dropout [13]。
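按照本节给出的超参数,训练配置可示意如下(译者补充;其中的占位模型,以及用ReduceLROnPlateau来近似“误差平稳时除以10”,均为假设性写法):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 64, 7, stride=2, padding=3)   # 占位模型,实际应为普通/残差网络
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,          # 初始学习率 0.1
                            momentum=0.9, weight_decay=1e-4)     # 动量 0.9、权重衰减 0.0001
# 原文在误差进入平台期时将学习率除以 10,这里用 ReduceLROnPlateau 近似
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1)
# mini-batch 大小为 256,最多训练 60x10^4 次迭代;每轮验证后调用 scheduler.step(val_error)
```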

In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fullyconvolutional form as in [40, 12], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).
在测试中,为了进行比较研究,我们采用标准的10-crop测试[21]。为获得最佳结果,我们采用[40,12]中的全卷积形式,并在多个尺度上对得分取平均(调整图像大小,使较短边位于{224, 256, 384, 480, 640}之中)。
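10-crop测试可以借助torchvision的TenCrop变换来示意(译者补充,Resize到256等细节为假设):

```python
import torch
from torchvision import transforms

# 10-crop:四角 + 中心裁剪及其水平翻转,共 10 个 224x224 视图
ten_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),
    transforms.Lambda(lambda crops: torch.stack(
        [transforms.ToTensor()(c) for c in crops])),   # 得到形状 (10, 3, 224, 224)
])
# 推理时对 10 个视图的得分取平均:
# scores = model(ten_crop(img)).mean(dim=0)
```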

4 Experiments

4 实验

4.1 ImageNet Classification

4.1 ImageNet分类

We evaluate our method on the ImageNet 2012 classification dataset [35] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.
Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.
我们在包含1000个类别的ImageNet 2012分类数据集[35]上评估我们的方法。模型在128万张训练图像上训练,并在5万张验证图像上评估。我们还在10万张测试图像上获得了由测试服务器报告的最终结果。我们同时评估top-1和top-5错误率。
普通网络。 我们首先评估18层和34层普通网络。34层普通网络在图3中(中)。18层普通网络具有类似的形式。有关详细架构,请参见表1。
[表1:详细架构]
The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.
表2中的结果表明,较深的34层普通网络比较浅的18层普通网络具有更高的验证误差。为了揭示原因,我们在图4(左)中比较了它们在训练过程中的训练/验证误差。我们观察到了退化问题:即使18层普通网络的解空间是34层普通网络解空间的子空间,34层普通网络在整个训练过程中仍具有更高的训练误差。
[表2与图4]
We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN [16], which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reducing of the training error3. The reason for such optimization difficulties will be studied in the future.
我们认为这种优化困难不太可能是由梯度消失引起的。这些普通网络使用BN [16]进行训练,确保前向传播的信号具有非零方差。我们还验证了在BN下反向传播的梯度表现出健康的范数。因此前向和反向信号都没有消失。实际上,34层普通网络仍然能够达到有竞争力的精度(表3),这表明求解器在一定程度上是有效的。我们推测深层普通网络可能具有指数级低的收敛速度,从而影响训练误差的降低。这种优化困难的原因将留待未来研究。

Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of 3×3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts.
残差网络。 接下来,我们评估18层和34层残差网络(ResNets)。其基线架构与上述普通网络相同,不同之处在于按图3(右)为每对3×3滤波器添加了一个快捷连接。在第一组比较中(表2和图4右),我们对所有快捷连接使用恒等映射,并对增加的维度使用零填充(选项A),因此与对应的普通网络相比,它们没有额外参数。
We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.
我们从表2和图4中得到三个主要观察结果。首先,残差学习扭转了上述情况:34层ResNet优于18层ResNet(错误率降低2.8%)。更重要的是,34层ResNet表现出明显更低的训练误差,并且能够泛化到验证数据。这表明在这种设置下退化问题得到了很好的解决,我们能够从增加的深度中获得精度提升。

Second, compared to its plain counterpart, the 34-layer ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems.
其次,与对应的普通网络相比,34层ResNet将top-1错误率降低了3.5%(表2),这得益于训练误差的成功降低(图4右与左对比)。这一比较验证了残差学习在极深系统上的有效性。

Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is “not overly deep” (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.
最后,我们还注意到18层普通网络和残差网络的精度相当(表2),但18层ResNet收敛更快(图4右与左对比)。当网络“不太深”时(这里是18层),当前的SGD求解器仍然能够为普通网络找到好的解。在这种情况下,ResNet通过在早期提供更快的收敛来简化优化。

Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections.
恒等快捷连接与投影快捷连接。 我们已经证明,无参数的恒等快捷连接有助于训练。接下来我们研究投影快捷连接(公式(2))。在表3中,我们比较三种选项:(A)零填充快捷连接用于增加维度,所有快捷连接都是无参数的(与表2和图4右相同);(B)投影快捷连接用于增加维度,其他快捷连接为恒等连接;(C)所有快捷连接都是投影。
[表3]
Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.
表3显示,这三种选项都明显优于对应的普通网络。B略好于A,我们认为这是因为A中零填充的维度实际上没有进行残差学习。C比B略好,我们将其归因于许多(十三个)投影快捷连接引入的额外参数。但A/B/C之间的细微差异表明,投影快捷连接对于解决退化问题并非必不可少。因此,为了减少内存/时间复杂度和模型大小,本文其余部分不使用选项C。恒等快捷连接对于不增加下文介绍的瓶颈架构的复杂度尤其重要。

Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design4. For each residual function F, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity.
更深的瓶颈架构。 接下来,我们描述用于ImageNet的更深的网络。出于对可承受训练时间的考虑,我们将构建块修改为瓶颈(bottleneck)设计。对于每个残差函数F,我们使用3层而不是2层的堆叠(图5)。这三层分别是1×1、3×3和1×1卷积,其中1×1层负责先降维再升维(恢复维度),使3×3层成为具有较小输入/输出维度的瓶颈。图5给出了一个示例,两种设计具有相近的时间复杂度。
[图5:两层构建块与三层瓶颈构建块]
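瓶颈构建块的一个示意性PyTorch草图如下(译者补充,通道数等为示例假设):1×1卷积先降维,3×3卷积在低维上计算,1×1卷积再升维,快捷连接仍为恒等映射,相加后再激活。

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """1x1 降维 -> 3x3 -> 1x1 升维,恒等快捷连接(示意实现)。"""
    def __init__(self, channels, reduced):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, reduced, 1, bias=False), nn.BatchNorm2d(reduced), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, 3, padding=1, bias=False), nn.BatchNorm2d(reduced), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, channels, 1, bias=False), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)   # 相加之后再施加非线性

y = Bottleneck(256, 64)(torch.randn(1, 256, 56, 56))   # 输出仍为 (1, 256, 56, 56)
```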
The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.
无参数的恒等快捷连接对瓶颈架构尤其重要。如果把图5(右)中的恒等快捷连接替换为投影,可以证明时间复杂度和模型大小都会翻倍,因为快捷连接连到的是两个高维端。因此,恒等快捷连接能为瓶颈设计带来更高效的模型。

50-layer ResNet: We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs.
50层ResNet: 我们将34层网络中的每个2层块替换为这种3层瓶颈块,得到一个50层ResNet(表1)。我们使用选项B来增加维度。该模型有38亿次FLOP。

101-layer and 152-layer ResNets: We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs).
101层和152层ResNet: 我们通过使用更多的3层块来构建101层和152层ResNet(表1)。值得注意的是,尽管深度显著增加,152层ResNet(113亿次FLOP)的复杂度仍低于VGG-16/19网络(153亿/196亿次FLOP)。
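按表1的配置,不同深度的ResNet只在每个阶段堆叠的瓶颈块数目上不同,可示意如下(译者补充;层数 = 3×各阶段块数之和 + 首层卷积 + 末层全连接):

```python
# 每个阶段(conv2_x ~ conv5_x)堆叠的 3 层瓶颈块数目(见表1)
blocks = {
    50:  [3, 4, 6, 3],
    101: [3, 4, 23, 3],
    152: [3, 8, 36, 3],
}
for depth, cfg in blocks.items():
    # 每个块 3 层,再加上最开始的卷积层和最后的全连接层
    assert 3 * sum(cfg) + 2 == depth
```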

The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 4). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 4).
50/101/152层ResNet比34层ResNet的精度高出相当大的幅度(表3和表4)。我们没有观察到退化问题,因而能从显著增加的深度中获得可观的精度提升。所有评估指标都体现了深度带来的好处(表3和表4)。
[表4与表5]
Comparisons with State-of-the-art Methods. In Table 4 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015.
与最新方法的比较。 在表4中,我们与此前最好的单模型结果进行比较。我们的基线34层ResNet已经取得了非常有竞争力的精度。我们的152层ResNet的单模型top-5验证错误率为4.49%,这一单模型结果优于此前所有的集成结果(表5)。我们将六个不同深度的模型组合成一个集成(提交时其中只有两个152层模型),在测试集上取得3.57%的top-5错误率(表5)。该结果在ILSVRC 2015中获得第一名。

4.2 CIFAR-10 and Analysis

4.2 CIFAR-10与分析

We conducted more studies on the CIFAR-10 dataset [20], which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.
我们在CIFAR-10数据集[20]上进行了更多研究,该数据集包含10个类别的5万张训练图像和1万张测试图像。我们给出在训练集上训练、在测试集上评估的实验。我们关注的是极深网络的行为,而不是追求最先进的结果,因此有意使用如下简单架构。

The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. The following table summarizes the architecture:
普通/残差架构遵循图3(中/右)的形式。网络输入为32×32图像,并减去逐像素均值。第一层是3×3卷积。然后我们在尺寸分别为{32, 16, 8}的特征图上使用共6n层3×3卷积的堆叠,每种特征图尺寸对应2n层,滤波器数量分别为{16, 32, 64}。下采样由步幅为2的卷积执行。网络以全局平均池化、一个10路全连接层和softmax结束。总共有6n+2个堆叠的加权层。下表总结了该架构:
输出特征图尺寸 | 32×32 | 16×16 | 8×8
层数           | 1+2n  | 2n    | 2n
滤波器数量     | 16    | 32    | 64
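该架构可以由n直接生成。下面的草图(译者补充)列出各特征图尺寸上的层数与滤波器数,并验证总层数为6n+2:

```python
def cifar_plan(n):
    # 特征图尺寸 {32, 16, 8},各 2n 层;滤波器数分别为 {16, 32, 64}
    stages = [(32, 16, 2 * n), (16, 32, 2 * n), (8, 64, 2 * n)]
    total = 1 + sum(layers for _, _, layers in stages) + 1   # 首层 3x3 卷积 + 末层全连接
    return stages, total

for n in (3, 5, 7, 9, 18):
    stages, total = cifar_plan(n)
    print(n, total)   # n=3,5,7,9,18 分别对应 20, 32, 44, 56, 110 层
```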
When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.
当使用快捷连接时,它们被连接到成对的3×3层上(总共3n个快捷连接)。在该数据集上,我们在所有情况下都使用恒等快捷连接(即选项A),因此我们的残差模型与对应的普通模型具有完全相同的深度、宽度和参数量。

We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in [12] and BN [16] but with no dropout. These models are trained with a minibatch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image.
我们使用0.0001的权重衰减和0.9的动量,并采用[12]中的权重初始化和BN [16],但不使用dropout。这些模型在两块GPU上以128的mini-batch大小训练。我们从0.1的学习率开始,在第32k和48k次迭代时将其除以10,并在64k次迭代时终止训练,这一设置是在45k/5k的训练/验证划分上确定的。我们按照[24]中的简单数据增强进行训练:每侧填充4个像素,并从填充后的图像或其水平翻转中随机裁剪32×32的图像块。测试时,我们只评估原始32×32图像的单一视图。
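CIFAR-10上的训练设置可示意如下(译者补充;占位模型以及按迭代次数调用MultiStepLR均为示意写法):

```python
import torch
import torch.nn as nn
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),     # 每侧填充 4 个像素后随机裁剪 32x32
    transforms.RandomHorizontalFlip(),        # 随机水平翻转
    transforms.ToTensor(),
])

model = nn.Conv2d(3, 16, 3, padding=1)        # 占位模型,实际应为 6n+2 层的普通/残差网络
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
# 在第 32k、48k 次迭代时将学习率除以 10,训练到 64k 次迭代为止
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[32000, 48000], gamma=0.1)
# 训练循环中每个 mini-batch(大小 128)之后调用 optimizer.step() 与 scheduler.step()
```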

We compare n = {3, 5, 7, 9}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see [41]), suggesting that such an optimization difficulty is a fundamental problem.
我们比较n = {3, 5, 7, 9},得到20、32、44和56层网络。图6(左)显示了普通网络的行为。深层普通网络受深度增加的影响,层数越深训练误差越高。这种现象与ImageNet(图4,左)和MNIST(见[41])上的现象类似,表明这种优化困难是一个根本性的问题。
[图6:CIFAR-10上的训练曲线]
We further explore n = 18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging5. So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet [34] and Highway [41] (Table 6),yet is among the state-of-the-art results (6.43%, Table 6).
我们进一步探索n = 18,得到一个110层的ResNet。在这种情况下,我们发现初始学习率0.1略大,难以开始收敛。因此我们先用0.01预热训练,直到训练误差低于80%(约400次迭代),然后再回到0.1继续训练。其余的学习率计划与之前相同。这个110层网络收敛良好(图6,中)。它的参数比FitNet [34]和Highway [41]等其他又深又窄的网络更少(表6),但其结果仍属于最先进之列(6.43%,表6)。
[表6:CIFAR-10上的分类误差]
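110层网络的学习率预热逻辑可示意如下(译者补充;80%的阈值和0.01/0.1的取值来自上文,其余写法为假设):

```python
def learning_rate(iteration, train_error, warmed_up):
    """先用 0.01 预热,训练误差降到 80% 以下后回到 0.1,之后按原计划衰减。"""
    if not warmed_up and train_error >= 0.80:
        return 0.01, False            # 预热阶段(约 400 次迭代)
    if iteration < 32000:
        return 0.1, True
    if iteration < 48000:
        return 0.01, True             # 32k 次迭代后除以 10
    return 0.001, True                # 48k 次迭代后再除以 10

lr, warmed_up = learning_rate(100, 0.92, False)
print(lr)   # 0.01:仍处于预热阶段
```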
Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec.3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less.
层响应分析。 图7显示了层响应的标准差(std)。这里的响应是每个3×3层在BN之后、其他非线性(ReLU/加法)之前的输出。对于ResNet,这一分析揭示了残差函数的响应强度。图7显示,ResNet的响应通常比对应的普通网络小。这些结果支持我们的基本动机(第3.1节),即残差函数通常可能比非残差函数更接近于零。我们还注意到,更深的ResNet具有更小的响应幅度,图7中ResNet-20、56和110之间的比较可以证明这一点。当层数更多时,ResNet的单个层往往对信号的修改更少。
[图7:层响应的标准差]
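层响应的标准差可以用前向钩子来统计,示意如下(译者补充;这里只在单个“3×3卷积+BN”的输出处记录响应,模型与输入均为占位假设):

```python
import torch
import torch.nn as nn

layer = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1, bias=False),
                      nn.BatchNorm2d(16))       # 3x3 卷积 + BN(非线性之前)
responses = []
layer[1].register_forward_hook(
    lambda module, inp, out: responses.append(out.detach().std().item()))

with torch.no_grad():
    layer(torch.randn(8, 16, 32, 32))           # 占位输入
print(responses)                                # 该层响应的标准差
```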
Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n = 200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this 10^3-layer network is able to achieve training error <0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6).
探索超过1000层的模型。 我们探索了一个层数超过1000的极深模型。我们设置n = 200,得到一个1202层的网络,并按上述方式训练。我们的方法没有表现出优化困难,这个超过10³层的网络能够达到小于0.1%的训练误差(图6,右),其测试误差也相当不错(7.93%,表6)。

But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout [9] or dropout [13] is applied to obtain the best results ([9, 25, 24, 34]) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future.
但是,这种激进的深度模型仍然存在一些待解决的问题。尽管两者的训练误差相近,这个1202层网络的测试结果仍比我们110层网络的差。我们认为这是由于过拟合:对于这个小数据集,1202层的网络可能过大(19.4M参数)。在该数据集上,强正则化(如maxout [9]或dropout [13])被用来获得最佳结果([9,25,24,34])。在本文中,我们不使用maxout/dropout,只是通过设计上又深又窄的架构来施加正则化,以免分散对优化困难这一核心问题的关注。但结合更强的正则化可能会改善结果,我们将在未来进行研究。

4.3 Object Detection on PASCAL and MS COCO

4.3 基于PASCAL和MS-COCO的目标检测

Our method has good generalization performance on other recognition tasks. Table 7 and 8 show the object detection baseline results on PASCAL VOC 2007 and 2012 [5] and COCO [26]. We adopt Faster R-CNN [32] as the detection method. Here we are interested in the improvements of replacing VGG-16 [40] with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO’s standard metric (mAP@[.5,.95]), which is a 28% relative improvement. This gain is solely due to the learned representations.
我们的方法在其他识别任务上具有良好的泛化性能。表7和表8给出了在PASCAL VOC 2007和2012 [5]以及COCO [26]上的目标检测基线结果。我们采用Faster R-CNN [32]作为检测方法。这里我们关注的是用ResNet-101替换VGG-16 [40]所带来的提升。使用这两种模型的检测实现(见附录)是相同的,因此收益只能归因于更好的网络。最值得注意的是,在具有挑战性的COCO数据集上,我们在COCO标准指标(mAP@[.5, .95])上取得了6.0%的提升,相对提升达28%。这一收益完全来自所学习的表征。
[表7与表8:PASCAL VOC与COCO上的目标检测结果]
Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix.
基于深度残差网络,我们在ILSVRC和COCO 2015竞赛的多个项目中获得了第一名:ImageNet检测、ImageNet定位、COCO检测和COCO分割。详细信息见附录。

残差网络在效果上可以看作由多个较浅的网络融合而成。它并没有从根本上解决梯度消失问题,而是在一定程度上回避了这一问题:其中较浅的路径在训练时不会出现梯度消失,因此残差网络能够加速收敛。
