CV:翻译并解读2019《A Survey of the Recent Architectures of Deep Convolutional Neural Networks》第一章~第三章

导读:深度卷积神经网络的最新架构综述

 

原作者

Asifullah Khan1, 2*, Anabia Sohail1, 2, Umme Zahoora1, and Aqsa Saeed Qureshi1
1 Pattern Recognition Lab, DCIS, PIEAS, Nilore, Islamabad 45650, Pakistan
2 Deep Learning Lab, Center for Mathematical Sciences, PIEAS, Nilore, Islamabad 45650, Pakistan
asif@pieas.edu.pk

更新中……

相关文章
CV:翻译并解读2019《A Survey of the Recent Architectures of Deep Convolutional Neural Networks》第一章~第三章
CV:翻译并解读2019《A Survey of the Recent Architectures of Deep Convolutional Neural Networks》第四章
CV:翻译并解读2019《A Survey of the Recent Architectures of Deep Convolutional Neural Networks》第五章~第八章

 

 

目录

Abstract

1、Introduction

2 Basic CNN Components

2.1 Convolutional Layer

2.2 Pooling Layer

2.3 Activation Function

2.4 Batch Normalization

2.5 Dropout

2.6 Fully Connected Layer

3 Architectural Evolution of Deep CNN

3.1 Late 1980s-1999: Origin of CNN

3.2 Early 2000: Stagnation of CNN

3.3 2006-2011: Revival of CNN

3.4 2012-2014: Rise of CNN

3.5 2015-Present: Rapid increase in Architectural Innovations and Applications of CNN


 

 

Abstract

        Deep Convolutional Neural Networks (CNNs) are a special type of Neural Networks, which have shown state-of-the-art performance on various competitive benchmarks. The powerful learning ability of deep CNN is largely due to the use of multiple feature extraction stages (hidden layers) that can automatically learn representations from the data. Availability of a large amount of data and improvements in the hardware processing units have accelerated the research in CNNs, and recently very interesting deep CNN architectures are reported. The recent race in developing deep CNNs shows that the innovative architectural ideas, as well as parameter optimization, can improve CNN performance. In this regard, different ideas in the CNN design have been explored such as the use of different activation and loss functions, parameter optimization, regularization, and restructuring of the processing units. However, the major improvement in representational capacity of the deep CNN is achieved by the restructuring of the processing units. Especially, the idea of using a block as a structural unit instead of a layer is receiving substantial attention. This survey thus focuses on the intrinsic taxonomy present in the recently reported deep CNN architectures and consequently, classifies the recent innovations in CNN architectures into seven different categories. These seven categories are based on spatial exploitation, depth, multi-path, width, feature map exploitation, channel boosting, and attention. Additionally, this survey also covers the elementary understanding of CNN components and sheds light on its current challenges and applications.

       深度卷积神经网络(CNNs)是一种特殊类型的神经网络,在各种竞争性基准测试中表现出了最先进的性能。深度CNN强大的学习能力,很大程度上源于它使用多个特征提取阶段(隐含层)从数据中自动学习表示。大量数据的可用性和硬件处理单元的改进加速了CNN的研究,最近也报道了许多非常有意思的深度CNN架构。最近开发深度CNN的竞赛表明,创新的架构思想和参数优化都可以提高CNN的性能。为此,人们在CNN的设计中探索了不同的思路,例如使用不同的激活函数和损失函数、参数优化、正则化以及处理单元的重组。然而,深度CNN表征能力的主要提升是通过处理单元的重组实现的。特别是,使用块(block)而非层(layer)作为结构单元的想法正受到大量关注。因此,本综述聚焦于近期报道的深度CNN架构中内在的分类体系,并据此将CNN架构的最新创新分为七个不同的类别。这七个类别分别基于:空间利用、深度、多路径、宽度、特征图利用、通道提升和注意力机制。此外,本综述还涵盖了对CNN基本组件的理解,并阐述了其当前面临的挑战和应用。

Keywords: Deep Learning, Convolutional Neural Networks, Architecture, Representational Capacity, Residual Learning, and Channel Boosted CNN.

关键词:深度学习,卷积神经网络,架构,表征能力,残差学习,通道提升的CNN

 

1、Introduction

        Machine Learning (ML) algorithms belong to a specialized area in Artificial Intelligence (AI), which endows intelligence to computers by learning the underlying relationships among the data and making decisions without being explicitly programmed. Different ML algorithms have been developed since the late 1990s, for the emulation of human sensory responses such as speech and vision, but they have generally failed to achieve human-level satisfaction [1]–[6]. The challenging nature of Machine Vision (MV) tasks gives rise to a specialized class of Neural Networks (NN), known as Convolutional Neural Network (CNN) [7].

   机器学习(ML)算法属于人工智能(AI)的一个专门领域,它通过学习数据之间的基本关系并在没有显式编程的情况下做出决策,从而赋予计算机智能。自20世纪90年代末以来,针对语音、视觉等人类感官反应的仿真,人们开发了各种各样的ML算法,但普遍未能达到人类水平的满意程度[1]-[6]。由于机器视觉(MV)任务的挑战性,产生了一类专门的神经网络(NN),称为卷积神经网络(CNN)[7]。

     CNNs are considered as one of the best techniques for learning image content and have shown state-of-the-art results on image recognition, segmentation, detection, and retrieval related tasks [8], [9]. The success of CNN has captured attention beyond academia. In industry, companies such as Google, Microsoft, AT&T, NEC, and Facebook have developed active research groups for exploring new architectures of CNN [10]. At present, most of the frontrunners of image processing competitions are employing deep CNN based models.

CNNs被认为是学习图像内容的最佳技术之一,在图像识别、分割、检测和检索等相关任务上已经取得了最先进的成果[8],[9]。CNN的成功吸引了学术界以外的关注。在业界,谷歌、微软、AT&T、NEC、Facebook等公司都建立了活跃的研究小组,探索CNN的新架构[10]。目前,大多数图像处理竞赛的领跑者都在使用基于深度CNN的模型。
The topology of CNN is divided into multiple learning stages composed of a combination of the convolutional layer, non-linear processing units, and subsampling layers [11]. Each layer performs multiple transformations using a bank of convolutional kernels (filters) [12]. Convolution operation extracts locally correlated features by dividing the image into small slices (similar to the retina of the human eye), making it capable of learning suitable features. Output of the convolutional kernels is assigned to non-linear processing units, which not only helps in learning abstraction but also embeds non-linearity in the feature space. This non-linearity generates different patterns of activations for different responses and thus facilitates the learning of semantic differences in images. Output of the non-linear function is usually followed by subsampling, which helps in summarizing the results and also makes the input invariant to geometrical distortions [12], [13].

CNN的拓扑结构分为多个学习阶段,由卷积层、非线性处理单元和子采样层组合而成[11]。每一层使用一组卷积核(滤波器)执行多重变换[12]。卷积操作通过将图像分割成小块(类似于人眼视网膜)来提取局部相关特征,使其能够学习合适的特征。卷积核的输出被送入非线性处理单元,这不仅有助于学习抽象,而且在特征空间中嵌入了非线性。这种非线性会为不同的响应产生不同的激活模式,从而有助于学习图像中的语义差异。非线性函数的输出之后通常是子采样,这有助于汇总结果,并使输入对几何畸变保持不变[12],[13]。
The architectural design of CNN was inspired by Hubel and Wiesel's work and thus largely follows the basic structure of the primate's visual cortex [14], [15]. CNN first came to the limelight through the work of LeCun in 1989 for the processing of grid-like topological data (images and time series data) [7], [16]. The popularity of CNN is largely due to its hierarchical feature extraction ability. Hierarchical organization of CNN emulates the deep and layered learning process of the Neocortex in the human brain, which automatically extracts features from the underlying data [17]. The staging of the learning process in CNN shows quite a resemblance to the primate's ventral pathway of visual cortex (V1-V2-V4-IT/VTC) [18]. The visual cortex of primates first receives input from the retinotopic area, where multi-scale highpass filtering and contrast normalization are performed by the lateral geniculate nucleus. After this, detection is performed by different regions of the visual cortex categorized as V1, V2, V3, and V4. In fact, the V1 and V2 portions of the visual cortex are similar to convolutional and subsampling layers, whereas the inferior temporal region resembles the higher layers of CNN, which makes inference about the image [19]. During training, CNN learns through the backpropagation algorithm, by regulating the change in weights with respect to the input. Minimization of a cost function by CNN using the backpropagation algorithm is similar to the response based learning of the human brain. CNN has the ability to extract low, mid, and high-level features. High level features (more abstract features) are a combination of lower and mid-level features. With the automatic feature extraction ability, CNN reduces the need for synthesizing a separate feature extractor [20]. Thus, CNN can learn good internal representation from raw pixels with diminutive processing.

CNN的架构设计灵感来自于Hubel和Wiesel的工作,因此很大程度上遵循了灵长类动物视觉皮层的基本结构[14],[15]。CNN最早是在1989年通过LeCun处理网格状拓扑数据(图像和时间序列数据)的工作引起了人们的注意[7],[16]。CNN的流行很大程度上是由于它的层次化特征提取能力。CNN的分层组织模拟了人脑新皮层深层、分层的学习过程,自动从底层数据中提取特征[17]。CNN中学习过程的分阶段组织与灵长类视觉皮层腹侧通路(V1-V2-V4-IT/VTC)非常相似[18]。灵长类动物的视觉皮层首先接收来自视网膜拓扑(retinotopic)区域的输入,外侧膝状体核在该区域进行多尺度高通滤波和对比度归一化。之后,由视觉皮层中被划分为V1、V2、V3和V4的不同区域进行检测。事实上,视觉皮层的V1和V2部分与卷积层和子采样层相似,而颞下区则类似于CNN的较高层,负责对图像进行推断[19]。在训练过程中,CNN通过反向传播算法学习,即调节权重相对于输入的变化。CNN使用反向传播算法最小化代价函数,类似于人脑基于响应的学习。CNN能够提取低、中、高级特征,其中高级特征(更抽象的特征)是低级和中级特征的组合。凭借自动特征提取能力,CNN减少了设计独立特征提取器的需要[20]。因此,CNN只需极少的处理就可以从原始像素中学习到良好的内部表示。
The main boom in the use of CNN for image classification and segmentation occurred after it was observed that the representational capacity of a CNN can be enhanced by increasing its depth [21]. Deep architectures have an advantage over shallow architectures when dealing with complex learning problems. Stacking of multiple linear and non-linear processing units in a layer wise fashion provides deep networks the ability to learn complex representations at different levels of abstraction. In addition, advancements in hardware and thus the availability of high computing resources is also one of the main reasons of the recent success of deep CNNs. Deep CNN architectures have shown significant performance improvements over shallow and conventional vision based models. Apart from its use in supervised learning, deep CNNs have the potential to learn useful representations from large-scale unlabeled data. Use of multiple mapping functions by CNN enables it to improve the extraction of invariant representations and consequently makes it capable of handling recognition tasks of hundreds of categories. Recently, it has been shown that different levels of features, including both low and high-level, can be transferred to a generic recognition task by exploiting the concept of Transfer Learning (TL) [22]–[24]. Important attributes of CNN are hierarchical learning, automatic feature extraction, multi-tasking, and weight sharing [25]–[27].

CNN在图像分类和分割中应用的兴起,始于人们观察到CNN的表征能力可以通过增加其深度来增强[21]。在处理复杂的学习问题时,深度架构比浅层架构更具优势。以分层方式堆叠多个线性和非线性处理单元,使深层网络能够在不同抽象级别上学习复杂的表示。此外,硬件的进步以及高性能计算资源的可用性,也是深度CNN近来取得成功的主要原因之一。深度CNN架构的性能已显著优于浅层和传统的基于视觉的模型。除了在监督学习中的应用外,深度CNN还具有从大规模未标记数据中学习有用表示的潜力。CNN对多重映射函数的使用提高了不变表示的提取能力,从而使其能够处理数百个类别的识别任务。近来的研究表明,利用迁移学习(TL)的概念,可以将包括低层和高层在内的不同层次的特征迁移到一般的识别任务中[22]–[24]。CNN的重要特性是分层学习、自动特征提取、多任务处理和权重共享[25]–[27]。

        Various improvements in CNN learning strategy and architecture were performed to make CNN scalable to large and complex problems. These innovations can be categorized as parameter optimization, regularization, structural reformulation, etc. However, it is observed that CNN based applications became prevalent after the exemplary performance of AlexNet on the ImageNet dataset [21]. Thus, major innovations in CNN have been proposed since 2012 and were mainly due to restructuring of processing units and designing of new blocks. Similarly, Zeiler and Fergus [28] introduced the concept of layer-wise visualization of features, which shifted the trend towards extraction of features at low spatial resolution in deep architectures such as VGG [29]. Nowadays, most of the new architectures are built upon the principle of simple and homogenous topology introduced by VGG. On the other hand, the Google group introduced an interesting idea of split, transform, and merge, and the corresponding block is known as the inception block. The inception block for the very first time gave the concept of branching within a layer, which allows abstraction of features at different spatial scales [30]. In 2015, the concept of skip connections introduced by ResNet [31] for the training of deep CNNs became popular, and afterwards, this concept was used by most of the succeeding Nets, such as Inception-ResNet, WideResNet, ResNext, etc. [32]–[34].

在CNN学习策略和体系结构方面,人们进行了各种改进,使CNN能够扩展到大型复杂问题。这些创新可分为参数优化、正则化、结构重构等。然而,据观察,在AlexNet于ImageNet数据集上取得卓越表现之后,基于CNN的应用才变得普遍[21]。因此,CNN的重大创新大多是2012年以后提出的,主要归功于处理单元的重组和新模块(块)的设计。类似地,Zeiler和Fergus[28]引入了特征分层可视化的概念,这使研究趋势转向在诸如VGG[29]这样的深度架构中以较低空间分辨率提取特征。目前,大多数新的体系结构都是基于VGG提出的简单、同质的拓扑结构原理构建的。另一方面,Google团队引入了一个有趣的拆分、变换和合并的概念,相应的块称为inception块。inception块第一次给出了层内分支的概念,允许在不同的空间尺度上抽象特征[30]。2015年,ResNet[31]为训练深层CNN提出的跳跃连接(skip connection)概念广为流行,之后,这个概念被大多数后续网络使用,如Inception-ResNet、WideResNet、ResNext等[32]–[34]。

        In order to improve the learning capacity of a CNN, different architectural designs such as WideResNet, Pyramidal Net, Xception etc. explored the effect of multilevel transformations in terms of an additional cardinality and increase in width [32], [34], [35]. Therefore, the focus of research shifted from parameter optimization and connections readjustment towards improved architectural design (layer structure) of the network. This shift resulted in many new architectural ideas such as channel boosting, spatial and channel wise exploitation and attention based information processing etc. [36]–[38].

为了提高CNN的学习能力,不同的结构设计,如WideResNet、Pyramidal Net、Xception等,从增加基数(cardinality)和增加宽度的角度探讨了多级变换的效果[32],[34],[35]。因此,研究的重点从参数优化和连接的重新调整,转向网络的改进结构设计(层结构)。这种转变产生了许多新的架构思想,如通道提升、空间和通道层面的利用以及基于注意力的信息处理等[36]–[38]。
In the past few years, different interesting surveys have been conducted on deep CNNs that elaborate the basic components of CNN and their alternatives. The survey reported by [39] has reviewed the famous architectures from 2012-2015 along with their components. Similarly, in the literature, there are prominent surveys that discuss different algorithms of CNN and focus on applications of CNN [20], [26], [27], [40], [41]. Likewise, the survey presented in [42] discussed the taxonomy of CNNs based on acceleration techniques. On the other hand, in this survey, we discuss the intrinsic taxonomy present in the recent and prominent CNN architectures. The various CNN architectures discussed in this survey can be broadly classified into seven main categories, namely: spatial exploitation, depth, multi-path, width, feature map exploitation, channel boosting, and attention based CNNs. The rest of the paper is organized in the following order (shown in Fig. 1): Section 1 summarizes the underlying basics of CNN, its resemblance with the primate's visual cortex, as well as its contribution to MV. In this regard, Section 2 provides an overview of basic CNN components and Section 3 discusses the architectural evolution of deep CNNs. Section 4 discusses the recent innovations in CNN architectures and categorizes CNNs into seven broad classes. Sections 5 and 6 shed light on applications of CNNs and current challenges, whereas Section 7 discusses future work and the last section draws the conclusion.

在过去的几年里,已有多篇关于深度CNN的有趣综述,阐述了CNN的基本组成部分及其替代方案。[39]报告的综述回顾了2012-2015年的著名架构及其组成部分。类似地,在文献中,有一些著名的综述讨论了CNN的不同算法,并侧重于CNN的应用[20],[26],[27],[40],[41]。同样,[42]中的综述讨论了基于加速技术的CNN分类。而在本综述中,我们讨论的是近期著名CNN架构中存在的内在分类体系。本综述中讨论的各种CNN架构大致可分为七大类,即:空间利用、深度、多路径、宽度、特征图利用、通道提升和基于注意力的CNN。论文的其余部分按以下顺序组织(如图1所示):第1节总结了CNN的基本原理、它与灵长类视觉皮层的相似性,以及它在机器视觉(MV)中的贡献。在此基础上,第2节概述了CNN的基本组件,第3节讨论了深度CNN的体系结构演变。第4节讨论了CNN体系结构的最新创新,并将CNN分为七大类。第5节和第6节阐述了CNN的应用和当前面临的挑战,第7节讨论未来的工作,最后一节给出结论。

                                                                  Fig. 1: Organization of the survey paper.

 

2 Basic CNN Components

        Nowadays, CNN is considered as the most widely used ML technique, especially in vision related applications. CNNs have recently shown state-of-the-art results in various ML applications. A typical block diagram of an ML system is shown in Fig. 2. Since CNN possesses both good feature extraction and strong discrimination ability, in an ML system it is mostly used for feature extraction and classification.

   目前,CNN被认为是应用最广泛的ML技术,尤其是在视觉相关应用中。CNN最近在各种ML应用中显示了最先进的结果。ML系统的典型框图如图2所示。由于CNN具有良好的特征提取能力和较强的判别能力,因此在ML系统中,它主要用于特征提取和分类。

A typical CNN architecture generally comprises alternating layers of convolution and pooling, followed by one or more fully connected layers at the end. In some cases, the fully connected layer is replaced with a global average pooling layer. In addition to the various learning stages, different regulatory units such as batch normalization and dropout are also incorporated to optimize CNN performance [43]. The arrangement of CNN components plays a fundamental role in designing new architectures and thus achieving enhanced performance. This section briefly discusses the role of these components in CNN architecture.

典型的CNN体系结构通常由交替的卷积层和池化层组成,最后是一个或多个全连接层。在某些情况下,全连接层会被全局平均池化层取代。除了各个学习阶段,还会加入批归一化(batch normalization)和dropout等不同的调节单元来优化CNN的性能[43]。CNN组件的排列方式在设计新的体系结构、从而获得更高性能方面起着基础性作用。本节简要讨论这些组件在CNN架构中的作用。
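
博主补充:下面用一个极简示意来直观展示"卷积+池化交替、末端全连接"这一典型布局。原文并未指定任何框架,这里假设使用 PyTorch;层数、通道数、类别数等均为随意选取的示例值,并非论文中的某个具体网络。

```python
import torch
import torch.nn as nn

# 典型CNN布局示意:卷积层与池化层交替,末端接全连接层做分类(仅为示例结构)。
typical_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 卷积:提取局部相关特征
    nn.ReLU(),                                     # 非线性处理单元
    nn.MaxPool2d(2),                               # 池化(下采样)
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),                            # 批归一化等调节单元常被插入其中
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                     # 全连接层用于分类(假设输入为32x32图像)
)

x = torch.randn(1, 3, 32, 32)    # 假设的一张 32x32 RGB 图像
print(typical_cnn(x).shape)       # torch.Size([1, 10])
```

若如正文所述用全局平均池化替代全连接层,可将末尾的 Flatten 与 Linear 换成 nn.AdaptiveAvgPool2d(1) 加一个很小的线性分类头,这只是常见做法之一。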

 

2.1 Convolutional Layer

        Convolutional layer is composed of a set of convolutional kernels (each neuron acts as a kernel). These kernels are associated with a small area of the image known as a receptive field. It works by dividing the image into small blocks (receptive fields) and convolving them with a specific set of weights (multiplying elements of the filter with the corresponding receptive field elements) [43]. Convolution operation can be expressed as follows:

       卷积层由一组卷积核组成(每个神经元充当一个核)。这些核与图像中被称为感受野的一小块区域相关联。它的工作原理是将图像分割成小块(感受野),并用一组特定的权重与其卷积(将滤波器的元素与相应的感受野元素相乘)[43]。卷积运算可以表示为:

$$F_l^k = I_{x,y} * K_l^k \tag{1}$$

    Where the input image is represented by $I_{x,y}$, $x, y$ shows the spatial locality, and $K_l^k$ represents the $l$th convolutional kernel of the $k$th layer. Division of the image into small blocks helps in extracting locally correlated pixel values. This locally aggregated information is also known as a feature motif. Different sets of features within the image are extracted by sliding the convolutional kernel on the whole image with the same set of weights. This weight sharing feature of the convolution operation makes CNN parameter efficient as compared to fully connected Nets. Convolution operation may further be categorized into different types based on the type and size of filters, type of padding, and the direction of convolution [44]. Additionally, if the kernel is symmetric, the convolution operation becomes a correlation operation [16].

其中,输入图像由 $I_{x,y}$ 表示,$x, y$ 表示空间位置,$K_l^k$ 表示第 $k$ 层的第 $l$ 个卷积核。将图像分割成小块有助于提取局部相关的像素值。这种局部聚合的信息也被称为特征图案(feature motif)。通过在整幅图像上以同一组权值滑动卷积核,可以提取图像中不同的特征集合。与全连接网络相比,卷积运算的这种权值共享特性使CNN的参数更为高效。卷积操作还可以根据滤波器的类型和大小、填充的类型以及卷积的方向进一步分为不同的类型[44]。另外,如果核是对称的,卷积操作就变成了相关(correlation)操作[16]。
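
博主补充:为帮助理解式(1)中"滑动卷积核 + 权值共享"的含义,下面给出一个纯 NumPy 的极简实现示意(非原文内容;未考虑填充、步长与多通道等情形,函数名为自拟)。与深度学习库的习惯一致,这里实际实现的是互相关;如正文所述,当核对称时二者等价。

```python
import numpy as np

def conv2d_single(image, kernel):
    """对单通道图像做最简单的二维"卷积"(实为互相关,步长1、无填充)。
    同一组核权值在整幅图像上滑动共享,这正是卷积层参数高效的原因。"""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):              # 逐个感受野(图像小块)
        for x in range(out.shape[1]):
            receptive_field = image[y:y + kh, x:x + kw]
            out[y, x] = np.sum(receptive_field * kernel)   # 对应元素相乘再求和
    return out

image = np.random.rand(8, 8)          # 假设的 8x8 单通道输入
kernel = np.random.rand(3, 3)         # 一个 3x3 卷积核
print(conv2d_single(image, kernel).shape)   # (6, 6)
```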

 

2.2 Pooling Layer

        Feature motifs, which result as an output of the convolution operation, can occur at different locations in the image. Once features are extracted, their exact location becomes less important as long as their approximate position relative to others is preserved. Pooling or downsampling, like convolution, is an interesting local operation. It sums up similar information in the neighborhood of the receptive field and outputs the dominant response within this local region [45].

       卷积运算输出的特征图案(feature motifs)可以出现在图像的不同位置。一旦特征被提取出来,只要其相对于其他特征的大致位置保持不变,其精确位置就变得不那么重要了。与卷积类似,池化(下采样)是一种有趣的局部操作。它汇总感受野邻域内的相似信息,并输出该局部区域内的主导响应[45]。

$$Z^l = f_p\!\left(F_{x,y}^l\right) \tag{2}$$

Equation (2) shows the pooling operation, in which $Z^l$ represents the $l$th output feature map, $F_{x,y}^l$ shows the $l$th input feature map, whereas $f_p(\cdot)$ defines the type of pooling operation. The use of pooling operation helps to extract a combination of features, which are invariant to translational shifts and small distortions [13], [46]. Reduction in the size of feature map to invariant feature set not only regulates complexity of the network but also helps in increasing the generalization by reducing overfitting. Different types of pooling formulations such as max, average, L2, overlapping, spatial pyramid pooling, etc. are used in CNN [47]–[49].

         式(2)表示池化操作,其中 $Z^l$ 表示第 $l$ 个输出特征图,$F_{x,y}^l$ 表示第 $l$ 个输入特征图,而 $f_p(\cdot)$ 定义了池化操作的类型。池化操作有助于提取对平移和微小失真保持不变的特征组合[13],[46]。将特征图缩小为不变特征集,不仅能控制网络的复杂度,还能通过减少过拟合来提高泛化能力。CNN中使用了不同类型的池化形式,如最大池化、平均池化、L2池化、重叠池化、空间金字塔池化等[47]–[49]。
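
博主补充:式(2)中最大池化的一个 NumPy 示意如下(非原文内容;假设池化窗口不重叠、窗口大小等于步长,且输入尺寸可被整除,函数名为自拟)。

```python
import numpy as np

def max_pool2d(feature_map, size=2):
    """不重叠的最大池化:在每个局部邻域内只保留主导响应(最大值)。"""
    H, W = feature_map.shape
    out = np.zeros((H // size, W // size))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            window = feature_map[y * size:(y + 1) * size, x * size:(x + 1) * size]
            out[y, x] = window.max()      # 换成 window.mean() 即为平均池化
    return out

fmap = np.random.rand(6, 6)
print(max_pool2d(fmap).shape)   # (3, 3)
```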

 

2.3 Activation Function

        Activation function serves as a decision function and helps in learning a complex pattern. Selection of an appropriate activation function can accelerate the learning process. Activation function for a convolved feature map is defined in equation (3).

       激活函数作为一个决策函数,有助于学习一个复杂的模式。选择合适的激活函数可以加速学习过程。卷积特征映射的激活函数在方程(3)中定义。

$$T_l^k = f_A\!\left(F_l^k\right) \tag{3}$$

In the above equation, $F_l^k$ is the output of a convolution operation, which is assigned to the activation function $f_A(\cdot)$ that adds non-linearity and returns a transformed output $T_l^k$ for the $k$th layer. In the literature, different activation functions such as sigmoid, tanh, maxout, ReLU, and variants of ReLU such as leaky ReLU, ELU, and PReLU [39], [48], [50], [51] are used to inculcate non-linear combinations of features. However, ReLU and its variants are preferred over other activations as they help in overcoming the vanishing gradient problem [52], [53].

      在上式中,$F_l^k$ 是卷积运算的输出,它被送入激活函数 $f_A(\cdot)$;该函数引入非线性,并返回第 $k$ 层的变换输出 $T_l^k$。在文献中,人们使用不同的激活函数,如sigmoid、tanh、maxout、ReLU,以及ReLU的变体(如leaky ReLU、ELU和PReLU)[39],[48],[50],[51],来引入特征的非线性组合。然而,ReLU及其变体比其他激活函数更受欢迎,因为它们有助于克服梯度消失问题[52],[53]。
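
博主补充:下面用 NumPy 给出文中几种常见激活函数的示意实现(非原文内容;leaky ReLU 的负半轴斜率取 0.01 只是常用的默认示例值)。

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)             # 负半轴输出为0,正半轴恒等,有助于缓解梯度消失

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)  # 负半轴保留一个小斜率,避免单元完全"失活"

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x), leaky_relu(x), sigmoid(x), np.tanh(x))
```
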
         Fig. 2: Basic layout of a typical ML system. In ML related tasks, initially data is preprocessed and then assigned to a classification system. A typical ML problem follows three steps: stage 1 is related to data gathering and generation, stage 2 performs preprocessing and feature selection, whereas stage 3 is based on model selection, parameter tuning, and analysis. CNN has good feature extraction and strong discrimination ability; therefore, in an ML system it can be used for feature extraction and classification.

   图2:典型ML系统的基本布局。在与ML相关的任务中,首先对数据进行预处理,然后将其送入分类系统。一个典型的ML问题遵循三个步骤:阶段1与数据收集和生成相关,阶段2执行预处理和特征选择,而阶段3基于模型选择、参数调整和分析。CNN具有很好的特征提取能力和较强的判别能力,因此在ML系统中可用于特征提取和分类。

 

 

2.4 Batch Normalization

注:根据博主的经验,此处常为考点!

        Batch normalization is used to address the issues related to internal covariance shift within feature maps. The internal covariance shift is a change in the distribution of hidden units' values, which slows down the convergence (by forcing the learning rate to a small value) and requires careful initialization of parameters. Batch normalization for a transformed feature map $T_l^k$ is shown in equation (4).

       批归一化用于解决特征图内部协方差偏移(internal covariance shift)相关的问题。内部协方差偏移是指隐藏单元值分布的变化,它会减慢收敛速度(迫使学习率取较小的值),并且需要谨慎地初始化参数。对变换后的特征图 $T_l^k$ 进行批归一化如式(4)所示。

$$N_l^k = \frac{F_l^k - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}} \tag{4}$$

      In equation (4), $N_l^k$ represents the normalized feature map, $F_l^k$ is the input feature map, and $\mu_B$ and $\sigma_B^2$ depict the mean and variance of a feature map for a mini-batch, respectively. Batch normalization unifies the distribution of feature map values by bringing them to zero mean and unit variance [54]. Furthermore, it smoothens the flow of gradient and acts as a regulating factor, which thus helps in improving generalization of the network.

     在式(4)中,$N_l^k$ 表示归一化后的特征图,$F_l^k$ 是输入特征图,$\mu_B$ 和 $\sigma_B^2$ 分别表示一个小批量(mini-batch)中特征图的均值和方差。批归一化通过将特征图的值变换为零均值和单位方差来统一其分布[54]。此外,它还平滑了梯度的流动,并起到调节因子的作用,从而有助于提高网络的泛化能力。
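
博主补充:式(4)的归一化步骤可用如下 NumPy 代码示意(非原文内容;实际的 BN 层还包含可学习的缩放/平移参数 γ、β 以及推理阶段使用的滑动统计量,这里从简,ε 仅用于数值稳定)。

```python
import numpy as np

def batch_norm(F, eps=1e-5):
    """对一个 mini-batch 的特征图做归一化:减去批均值、除以批标准差。
    假设 F 的形状为 (batch, channels, H, W),按通道统计均值与方差。"""
    mu = F.mean(axis=(0, 2, 3), keepdims=True)    # mini-batch 均值 μ_B
    var = F.var(axis=(0, 2, 3), keepdims=True)    # mini-batch 方差 σ_B²
    return (F - mu) / np.sqrt(var + eps)          # 零均值、单位方差

F = np.random.randn(4, 8, 16, 16)
N = batch_norm(F)
print(N.mean(), N.std())   # 约为 0 和 1
```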

 

2.5 Dropout

        Dropout introduces regularization within the network, which ultimately improves generalization by randomly skipping some units or connections with a certain probability. In NNs, multiple connections that learn a non-linear relation are sometimes co-adapted, which causes overfitting [55]. This random dropping of some connections or units produces several thinned network architectures, and finally one representative network is selected with small weights. This selected architecture is then considered as an approximation of all of the proposed networks [56].

       Dropout在网络中引入正则化,它以一定概率随机跳过某些单元或连接,最终提高泛化能力。在神经网络中,学习非线性关系的多个连接有时会产生协同适应,从而导致过拟合[55]。随机丢弃一些连接或单元会产生若干"变瘦"的网络结构,最终选出一个权重较小的代表性网络。这个被选中的结构随后被视为所有候选网络的近似[56]。
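
博主补充:"以一定概率随机跳过部分单元"可用下面的 NumPy 片段示意(非原文内容;这里采用常见的 inverted dropout 写法,训练时按保留概率进行缩放,推理时不做任何处理,丢弃概率 0.5 仅为示例)。

```python
import numpy as np

def dropout(x, drop_prob=0.5, training=True):
    """训练时以 drop_prob 的概率随机丢弃单元,并按 1/(1-p) 缩放以保持期望不变。"""
    if not training or drop_prob == 0.0:
        return x
    keep_prob = 1.0 - drop_prob
    mask = (np.random.rand(*x.shape) < keep_prob).astype(x.dtype)
    return x * mask / keep_prob

activations = np.ones((2, 6))
print(dropout(activations, drop_prob=0.5))
```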

 

2.6 Fully Connected Layer

       Fully connected layer is mostly used at the end of the network for classification purposes. Unlike pooling and convolution, it is a global operation. It takes input from the previous layer and globally analyses the output of all the preceding layers [57]. This makes a non-linear combination of the selected features, which is used for the classification of data [58].

       全连接层主要用于网络末端,用于分类。与池化和卷积不同,它是一种全局操作。它接收前一层的输入,并对所有前序层的输出进行全局分析[57]。由此形成所选特征的非线性组合,用于数据的分类[58]。
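
博主补充:全连接层本质上是把前面各层的特征图展平后做一次全局的线性组合,可示意如下(非原文内容;权重、偏置与输入均为随机示例,函数名为自拟)。

```python
import numpy as np

def fully_connected(feature_maps, W, b):
    """将前面各层输出的特征图展平为向量,再做全局的仿射变换(线性组合)。"""
    x = feature_maps.reshape(feature_maps.shape[0], -1)   # (batch, C*H*W)
    return x @ W + b                                      # (batch, num_classes)

fmap = np.random.rand(2, 8, 4, 4)              # 假设:batch=2,8 个 4x4 特征图
W = np.random.randn(8 * 4 * 4, 10) * 0.01      # 假设 10 类输出
b = np.zeros(10)
print(fully_connected(fmap, W, b).shape)        # (2, 10)
```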

                                                             Fig. 3: Evolutionary history of deep CNNs

 

3 Architectural Evolution of Deep CNN

       Nowadays, CNNs are considered as the most widely used algorithms among biologically inspired AI techniques. CNN history begins from the neurobiological experiments conducted by Hubel and Wiesel (1959, 1962) [14], [59]. Their work provided a platform for many cognitive models, almost all of which were latterly replaced by CNN. Over the decades, different efforts have been carried out to improve the performance of CNNs. This history is pictorially represented in Fig. 3. These improvements can be categorized into five different eras and are discussed below.

       目前,CNN被认为是受生物学启发的人工智能技术中应用最广泛的算法。CNN的历史始于Hubel和Wiesel(1959, 1962)进行的神经生物学实验[14],[59]。他们的工作为许多认知模型提供了平台,后来这些模型几乎全部被CNN取代。几十年来,人们开展了各种工作来提高CNN的性能。这段历史在图3中以图形方式呈现。这些改进可以分为五个不同的时代,下面分别讨论。

 

3.1 Late 1980s-1999: Origin of CNN

       CNNs have been applied to visual tasks since the late 1980s. In 1989, LeCun et al. proposed the first multilayered CNN, named ConvNet, whose origin was rooted in Fukushima's Neocognitron [60], [61]. LeCun proposed supervised training of ConvNet using the backpropagation algorithm [7], [62], in contrast to the unsupervised reinforcement learning scheme used by its predecessor, the Neocognitron. LeCun's work thus laid a foundation for the modern 2D CNNs. Supervised training in CNN provides the automatic feature learning ability from raw input, rather than relying on the handcrafted features used by traditional ML methods. This ConvNet showed successful results for handwritten digit and zip code recognition related problems [63]. In 1998, ConvNet was improved by LeCun and used for classifying characters in a document recognition application [64]. This modified architecture was named LeNet-5, which was an improvement over the initial CNN as it can extract feature representations in a hierarchical way from raw pixels [65]. Reliance of LeNet-5 on fewer parameters along with consideration of the spatial topology of images enabled CNN to recognize rotational variants of the image [65]. Due to the good performance of CNN in optical character recognition, its commercial use in ATMs and banks started in 1993 and 1996, respectively. Though many successful milestones were achieved by LeNet-5, the main concern associated with it was that its discrimination power did not scale to classification tasks other than handwriting recognition.

       自20世纪80年代末以来,CNN已被应用于视觉任务。1989年,LeCun等人提出了第一个名为ConvNet的多层CNN,其起源可追溯到Fukushima的Neocognitron[60],[61]。与其前身Neocognitron所采用的无监督强化学习方案不同,LeCun提出使用反向传播算法[7],[62]对ConvNet进行有监督训练。LeCun的工作由此为现代二维CNN奠定了基础。CNN中的监督训练提供了从原始输入中自动学习特征的能力,而不是像传统ML方法那样依赖手工设计的特征。这个ConvNet在手写数字和邮政编码识别相关问题上取得了成功[63]。1998年,LeCun改进了ConvNet,并将其用于文档识别应用中的字符分类[64]。这种改进的结构被命名为LeNet-5,它是对最初CNN的改进,因为它可以从原始像素中以分层的方式提取特征表示[65]。LeNet-5依赖较少的参数,并考虑了图像的空间拓扑,使CNN能够识别图像的旋转变体[65]。由于CNN在光学字符识别方面的良好性能,其在ATM和银行中的商业应用分别始于1993年和1996年。尽管LeNet-5取得了许多成功的里程碑,但与之相关的主要问题是,它的判别能力没有扩展到手写识别以外的分类任务。

 

3.2 Early 2000: Stagnation of CNN

       In the late 1990s and early 2000s, interest in NNs declined and less attention was given to exploring the role of CNNs in different applications such as object detection, video surveillance, etc. Use of CNN in ML related tasks became dormant due to the insignificant improvement in performance at the cost of high computational time. At that time, other statistical methods and, in particular, SVM became more popular than CNN due to their relatively high performance [66]–[68]. It was widely presumed in the early 2000s that the backpropagation algorithm used for training of CNN was not effective in converging to optimal points and therefore unable to learn useful features in supervised fashion as compared to handcrafted features [69]. Meanwhile, different researchers kept working on CNN and tried to optimize its performance. In 2003, Simard et al. improved the CNN architecture and showed good results as compared to SVM on the handwritten digit benchmark dataset MNIST [64], [68], [70]–[72]. This performance improvement expedited the research in CNN by extending its application in optical character recognition (OCR) to other scripts' character recognition [72]–[74], deployment in image sensors for face detection in video conferencing, and regulation of street crimes, etc. Likewise, CNN based systems were industrialized in markets for tracking customers [75]–[77]. Moreover, CNN's potential in other applications such as medical image segmentation, anomaly detection, and robot vision was also explored [78]–[80].

       在20世纪90年代末和21世纪初,人们对神经网络的兴趣减退,对CNN在目标检测、视频监控等不同应用中作用的探索也越来越少。由于性能提升不明显且计算时间代价高昂,CNN在ML相关任务中的应用陷入沉寂。当时,其他统计方法,特别是SVM,由于其相对较高的性能,比CNN更受欢迎[66]–[68]。在21世纪初,人们普遍认为,用于训练CNN的反向传播算法难以有效收敛到最优点,因此与手工设计的特征相比,无法以监督方式学习到有用的特征[69]。与此同时,不同的研究人员仍在继续研究CNN,并试图优化其性能。2003年,Simard等人改进了CNN的体系结构,在手写数字基准数据集MNIST上取得了优于SVM的良好结果[64],[68],[70]–[72]。这种性能的提高加速了CNN的研究,将其应用从光学字符识别(OCR)扩展到其他文字的字符识别[72]–[74]、部署于图像传感器以用于视频会议中的人脸检测、街头犯罪的监管等。同样,基于CNN的系统也被产业化,用于市场中的顾客跟踪[75]–[77]。此外,CNN在医学图像分割、异常检测和机器人视觉等其他应用领域的潜力也得到了探索[78]–[80]。

 

3.3 2006-2011: Revival of CNN

       Deep NNs have generally complex architecture and time intensive training phase that sometimes spanned over weeks and even months. In early 2000, there were only a few techniques for the training of deep Networks. Additionally, it was considered that CNN is not able to scale for complex problems. These challenges halted the use of CNN in ML related tasks.

       深度NNs通常具有复杂的结构和时间密集型训练阶段,有时跨越数周甚至数月。在2000年初,只有少数技术用于训练深层网络。此外,有人认为CNN无法扩展到复杂的问题。这些挑战阻止了CNN在ML相关任务中的应用。             

      To address these problems, in 2006 many interesting methods were reported to overcome the difficulties encountered in the training of deep CNNs and learning of invariant features. Hinton proposed greedy layer-wise pre-training approach in 2006, for deep architectures, which revived and reinstated the importance of deep learning [81], [82]. The revival of a deep learning [83], [84] was one of the factors, which brought deep CNNs into the limelight. Huang et al. (2006) used max pooling instead of subsampling, which showed good results by learning of invariant features [46], [85].                   

为了解决这些问题,2006年出现了许多有趣的方法,以克服深度CNN训练和不变特征学习中遇到的困难。Hinton在2006年针对深度架构提出了贪婪的分层预训练方法,重新确立了深度学习的重要性[81],[82]。深度学习的复兴[83],[84]是使深度CNN成为焦点的因素之一。Huang等人(2006)使用最大池化代替子采样,通过学习不变特征取得了良好的结果[46],[85]。
        In late 2006, researchers started using graphics processing units (GPUs) [86], [87] to accelerate training of deep NN and CNN architectures [88], [89]. In 2007, NVIDIA launched the CUDA programming platform [90], [91], which allows exploitation of the parallel processing capabilities of GPUs to a much greater degree [92]. In essence, the use of GPUs for NN training [88], [93] and other hardware improvements were the main factors which revived the research in CNN. In 2010, Fei-Fei Li's group at Stanford established a large database of images known as ImageNet, containing millions of labeled images [94]. This database was coupled with the annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC) competitions, where the performances of various models have been evaluated and scored [95]. Consequently, ILSVRC and NIPS have been very active in strengthening research and increasing the use of CNN and thus making it popular. This was a turning point in improving the performance and increasing the use of CNN.

2006年末,研究人员开始使用图形处理器(GPU)[86],[87]来加速深度神经网络和CNN架构的训练[88],[89]。2007年,NVIDIA推出了CUDA编程平台[90],[91],使GPU的并行处理能力能够被更充分地利用[92]。从本质上讲,GPU在神经网络训练中的应用[88],[93]和其他硬件改进是使CNN研究重新活跃起来的主要因素。2010年,斯坦福大学李飞飞的团队建立了一个名为ImageNet的大型图像数据库,其中包含数百万张带标签的图像[94]。该数据库与每年举办的ImageNet大规模视觉识别挑战赛(ILSVRC)相结合,对各种模型的性能进行评估和打分[95]。因此,ILSVRC和NIPS在加强研究、扩大CNN的使用方面非常积极,从而使其流行起来。这是提高CNN性能、扩大其应用的一个转折点。

 

3.4 2012-2014: Rise of CNN

       Availability of big training data, hardware advancements, and computational resources contributed to advancement in CNN algorithms. Renaissance of CNN in object detection, image classification, and segmentation related tasks had been observed in this period [9], [96]. However, the success of CNN in image classification tasks was not only due to the result of aforementioned factors but largely contributed by the architectural modifications, parameter optimization, incorporation of regulatory units, and reformulation and readjustment of connections within the network [39], [42], [97].

       大规模训练数据的可用性、硬件的进步和计算资源共同推动了CNN算法的发展。这一时期,CNN在目标检测、图像分类和分割相关任务中迎来了复兴[9],[96]。然而,CNN在图像分类任务中的成功不仅源于上述因素,更在很大程度上归功于架构的改进、参数的优化、调节单元的引入以及网络内连接的重构与调整[39],[42],[97]。

       The main breakthrough in CNN performance was brought by AlexNet [21]. AlexNet won the 2012-ILSVRC competition, which has been one of the most difficult challenges in image detection and classification. AlexNet improved performance by exploiting depth (incorporating multiple levels of transformation) and introduced regularization term in CNN. The exemplary performance of AlexNet [21] compared to conventional ML techniques in 2012-ILSVRC (AlexNet reduced error rate from 25.8 to 16.4) suggested that the main reason of the saturation in CNN performance before 2006 was largely due to the unavailability of enough training data and computational resources. In summary, before 2006, these resource deficiencies made it hard to train a high-capacity CNN without deterioration of performance [98].             

CNN性能的主要突破是由AlexNet带来的[21]。AlexNet赢得了2012-ILSVRC竞赛,这是图像检测和分类领域最困难的挑战之一。AlexNet通过利用深度(包含多个层次的变换)提高了性能,并在CNN中引入了正则化项。与2012-ILSVRC中的传统ML技术相比,AlexNet的卓越表现(AlexNet将错误率从25.8降低到16.4)表明,2006年之前CNN性能停滞的主要原因,在很大程度上是缺乏足够的训练数据和计算资源。总之,在2006年之前,这些资源的不足使得很难在不损失性能的情况下训练高容量的CNN[98]。
      With CNN becoming more of a commodity in the computer vision (CV) field, a number of attempts have been made to improve the performance of CNN with reduced computational cost. Therefore, each new architecture tries to overcome the shortcomings of previously proposed architectures in combination with new structural reformulations. In the years 2013 and 2014, researchers mainly focused on parameter optimization to accelerate CNN performance in a range of applications with a small increase in computational complexity. In 2013, Zeiler and Fergus [28] defined a mechanism to visualize the learned filters of each CNN layer. The visualization approach was used to improve the feature extraction stage by reducing the size of the filters. Similarly, the VGG architecture [29] proposed by the Oxford group, which was runner-up at the 2014-ILSVRC competition, made the receptive field much smaller in comparison to that of AlexNet, but with increased volume. In VGG, depth was increased from 9 layers to 16 by making the volume of feature maps double at each layer. In the same year, GoogleNet [99], which won the 2014-ILSVRC competition, not only exerted its efforts to reduce computational cost by changing the layer design, but also widened the width in compliance with depth to improve CNN performance. GoogleNet introduced the concept of split, transform, and merge based blocks, within which multiscale and multilevel transformations are incorporated to capture both local and global information [33], [99], [100]. The use of multilevel transformations helps CNN in tackling details of images at various levels. In the years 2012-14, the main improvement in the learning capacity of CNN was achieved by increasing its depth and by parameter optimization strategies. This suggested that the depth of a CNN helps in improving the performance of a classifier.

随着CNN在计算机视觉(CV)领域的应用越来越普及,人们进行了许多在降低计算成本的同时提升CNN性能的尝试。因此,每一个新的架构都试图结合新的结构重构来克服先前架构的缺点。在2013和2014年,研究人员主要关注参数优化,以便在计算复杂度仅小幅增加的前提下加速CNN在一系列应用中的性能。2013年,Zeiler和Fergus[28]定义了一种可视化每个CNN层学习到的滤波器的机制,并利用这种可视化方法通过减小滤波器的尺寸来改进特征提取阶段。类似地,在2014-ILSVRC竞赛中获得亚军的牛津团队提出的VGG架构[29],使感受野比AlexNet的小得多,但体积(volume)有所增加。在VGG中,通过使每层的特征图数量翻倍,深度从9层增加到16层。同年,赢得2014-ILSVRC竞赛的GoogleNet[99]不仅努力通过改变层的设计来降低计算成本,还在增加深度的同时拓宽了宽度,以提高CNN的性能。GoogleNet引入了基于拆分、变换和合并的块的概念,其中结合了多尺度和多级变换来捕获局部和全局信息[33],[99],[100]。多级变换的使用有助于CNN处理不同层次的图像细节。在2012-2014年间,CNN学习能力的主要提升是通过增加深度和参数优化策略来实现的。这表明CNN的深度有助于提高分类器的性能。
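
博主补充:"拆分-变换-合并"(split, transform, and merge)的思想可以用下面的 PyTorch 片段示意(非原文内容,也不是GoogleNet原始inception模块的实现;分支数、卷积核尺寸与通道数均为示例):同一层内并行使用不同尺度的卷积核,再在通道维拼接,从而在不同空间尺度上抽象特征。

```python
import torch
import torch.nn as nn

class SplitTransformMerge(nn.Module):
    """极简的 split-transform-merge 块:三个并行分支在不同空间尺度上做变换,再按通道合并。"""
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.branch3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.relu = nn.ReLU()

    def forward(self, x):
        # 拆分为多个分支 -> 各自变换 -> 在通道维合并
        out = torch.cat([self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)
        return self.relu(out)

x = torch.randn(1, 16, 32, 32)
print(SplitTransformMerge(16, 8)(x).shape)   # torch.Size([1, 24, 32, 32])
```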

 

3.5 2015-Present: Rapid increase in Architectural Innovations and Applications of CNN

       It is generally observed that the major improvements in CNN performance occurred from 2015-2019. The research in CNN is still ongoing and has a significant potential for improvement. The representational capacity of CNN depends on its depth and, in a sense, can help in learning complex problems by defining a diverse level of features ranging from simple to complex. Multiple levels of transformation make learning easy by chopping complex problems into smaller modules. However, the main challenge faced by deep architectures is the problem of negative learning, which occurs due to diminishing gradient at lower layers of the network. To handle this problem, different research groups worked on readjustment of layer connections and design of new modules. In early 2015, Srivastava et al. used the concept of cross-channel connectivity and an information gating mechanism to solve the vanishing gradient problem and to improve the network representational capacity [101]–[103]. This idea became popular in late 2015 and a similar concept of residual blocks or skip connections was coined [31]. Residual blocks are a variant of cross-channel connectivity, which smoothen learning by regularizing the flow of information across blocks [104]–[106]. This idea was used in the ResNet architecture for the training of a 150-layer deep network [31]. The idea of cross-channel connectivity is further extended to multilayer connectivity by Deluge, DenseNet, etc. to improve representation [107], [108].

       一般认为,CNN性能的重大改进发生在2015-2019年。CNN的研究仍在进行中,并有很大的改进潜力。CNN的表征能力取决于它的深度,在某种意义上,可以通过定义从简单到复杂的不同层次的特征来帮助学习复杂的问题。多层次的变换通过将复杂问题分解成较小的模块,使学习变得容易。然而,深度架构面临的主要挑战是负学习问题,这是由于网络较低层的梯度逐渐消失而产生的。为了解决这个问题,不同的研究小组致力于重新调整层间连接和设计新的模块。2015年初,Srivastava等人利用跨通道连接和信息门控机制的概念解决了梯度消失问题,提高了网络的表征能力[101]–[103]。这一想法在2015年末广为流行,随后提出了类似的残差块(residual block)或跳跃连接(skip connection)的概念[31]。残差块是跨通道连接的一种变体,它通过规范跨块的信息流来使学习更加平滑[104]–[106]。该思想被用于ResNet体系结构中,用于训练150层的深度网络[31]。跨通道连接的思想又被Deluge、DenseNet等进一步扩展到多层连接,以改进表示[107],[108]。
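
博主补充:残差块(跳跃连接)的核心是把块的输入直接加到块的输出上,使信息和梯度能够更顺畅地跨块流动。下面是一个 PyTorch 示意(非原文内容,也非ResNet的原始实现;为便于直接相加,这里假设输入与输出通道数相同)。

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """极简残差块:输出 = ReLU( F(x) + x ),其中 F 为两层卷积变换。"""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)        # 跳跃连接:恒等映射与残差相加

x = torch.randn(1, 16, 32, 32)
print(ResidualBlock(16)(x).shape)        # torch.Size([1, 16, 32, 32])
```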

        In the year 2016, the width of the network was also explored in connection with depth to improve feature learning [34], [35]. Apart from this, no new architectural modification became prominent; instead, different researchers used hybrids of the already proposed architectures to improve deep CNN performance [33], [104]–[106], [109], [110]. This fact gave the intuition that there might be other factors more important than the appropriate assembly of network units that can effectively regulate CNN performance. In this regard, Hu et al. (2017) identified that the network representation has a role in the learning of deep CNNs [111]. Hu et al. introduced the idea of feature map exploitation and pinpointed that less informative and domain extraneous features may affect the performance of the network to a larger extent. They exploited the aforementioned idea and proposed a new architecture named Squeeze and Excitation Network (SE-Network) [111]. It exploits feature map (commonly known as channel in literature) information by designing a specialized SE-block. This block assigns a weight to each feature map depending upon its contribution to class discrimination. This idea was further investigated by different researchers, who assign attention to important regions by exploiting both spatial and feature map (channel) information [37], [38], [112]. In 2018, a new idea of channel boosting was introduced by Khan et al. [36]. The motivation behind the training of a network with boosted channel representation was to use an enriched representation. This idea effectively boosts the performance of a CNN by learning diverse features as well as exploiting the already learnt features through the concept of TL.

2016年,人们还结合深度探索了网络的宽度,以改进特征学习[34],[35]。除此之外,没有新的突出的架构改动,不同的研究人员转而使用已有架构的混合来改进深度CNN的性能[33],[104]–[106],[109],[110]。这一事实给人的直觉是,与网络单元的适当组装相比,可能还有其他更重要的因素能够有效调节CNN的性能。在这方面,Hu等人(2017)指出网络表示在深度CNN的学习中扮演着重要角色[111]。Hu等人提出了特征图利用(feature map exploitation)的思想,并指出信息量较少以及与领域无关的特征可能在更大程度上影响网络的性能。他们利用上述思想,提出了名为挤压-激励网络(Squeeze and Excitation Network, SE-Network)的新结构[111]。它通过设计专门的SE块来利用特征图(文献中通常称为通道)信息。该块根据每个特征图对类别判别的贡献为其分配权重。不同的研究者对这一思想作了进一步研究,通过同时利用空间信息和特征图(通道)信息,将注意力分配到重要区域[37],[38],[112]。2018年,Khan等人[36]提出了通道提升(channel boosting)的新思想。使用提升后的通道表示来训练网络,其动机在于利用更丰富的表示。这一思想通过学习多样化的特征,并借助迁移学习(TL)的概念利用已学习的特征,有效地提升了CNN的性能。
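
博主补充:SE块"按特征图(通道)的贡献分配权重"的做法可用下面的 PyTorch 片段示意(非原文内容;与论文[111]的原始实现细节可能不同,通道压缩比取 4 仅为示例)。

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """极简 Squeeze-and-Excitation 块:先用全局平均池化压缩空间信息(squeeze),
    再用两层全连接为每个通道学习一个 0~1 的权重(excitation),最后按通道加权。"""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                      # 信息量大的通道被放大,无关通道被抑制

x = torch.randn(1, 16, 32, 32)
print(SEBlock(16)(x).shape)               # torch.Size([1, 16, 32, 32])
```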
       From 2012 up till now, a lot of improvements have been reported in CNN architecture. As regards the architectural advancement of CNNs, recently the focus of research has been on designing of new blocks that can boost network representation by exploiting both feature maps and spatial information or by adding artificial channels.

从2012年至今,CNN的架构已有许多改进。就CNN的架构进展而言,近年来的研究重点是设计新的块,通过同时利用特征图和空间信息,或通过添加人工通道来增强网络表示。

 
