Paper title: Machine learning: Trends, perspectives, and prospects
Source: Machine learning: Trends, perspectives, and prospects, Science, 2015
Translated by: BDML@CQUT Lab

Machine learning: Trends, perspectives, and prospects

M. I. Jordan1* and T. M. Mitchell2*

Abstract

Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.

Main Text

Machine learning is a discipline focused on two interrelated questions: How can one construct computer systems that automatically improve through experience? and What are the fundamental statistical-computational-information theoretic laws that govern all learning systems, including computers, humans, and organizations? The study of machine learning is important both for addressing these fundamental scientific and engineering questions and for the highly practical computer software it has produced and fielded across many applications.

Machine learning has progressed dramatically over the past two decades, from laboratory curiosity to a practical technology in widespread commercial use. Within artificial intelligence (AI), machine learning has emerged as the method of choice for developing practical software for computer vision, speech recognition, natural language processing, robot control, and other applications. Many developers of AI systems now recognize that, for many applications, it can be far easier to train a system by showing it examples of desired input-output behavior than to program it manually by anticipating the desired response for all possible inputs. The effect of machine learning has also been felt broadly across computer science and across a range of industries concerned with data-intensive issues, such as consumer services, the diagnosis of faults in complex systems, and the control of logistics chains. There has been a similarly broad range of effects across empirical sciences, from biology to cosmology to social science, as machine-learning methods have been developed to analyze high-throughput experimental data in novel ways. See Fig. 1 for a depiction of some recent areas of application of machine learning.

Fig. 1. Applications of machine learning. Machine learning is having a substantial effect on many areas of technology and science; examples of recent applied success stories include robotics and autonomous vehicle control (top left), speech processing and natural language processing (top right), neuroscience research (middle), and applications in computer vision (bottom).

A learning problem can be defined as the problem of improving some measure of performance when executing some task, through some type of training experience. For example, in learning to detect credit-card fraud, the task is to assign a label of “fraud” or “not fraud” to any given credit-card transaction. The performance metric to be improved might be the accuracy of this fraud classifier, and the training experience might consist of a collection of historical credit-card transactions, each labeled in retrospect as fraudulent or not. Alternatively, one might define a different performance metric that assigns a higher penalty when “fraud” is labeled “not fraud” than when “not fraud” is incorrectly labeled “fraud.” One might also define a different type of training experience—for example, by including unlabeled credit-card transactions along with labeled examples.
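
To make these three ingredients concrete, here is a minimal Python sketch (not from the paper; the cost values are illustrative assumptions) of an asymmetric performance metric that penalizes a missed fraud more heavily than a false alarm:

```python
# A minimal sketch of the three ingredients of a learning problem for fraud
# detection: the task (assign a label), a performance metric (here, an
# asymmetric cost; the costs below are illustrative assumptions), and the
# training experience (historical transactions labeled in retrospect).

def asymmetric_cost(y_true, y_pred, miss_cost=10.0, false_alarm_cost=1.0):
    """Average penalty that charges more for labeling 'fraud' as 'not fraud'
    (a miss) than for the reverse error (a false alarm)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        if t == "fraud" and p == "not fraud":
            total += miss_cost
        elif t == "not fraud" and p == "fraud":
            total += false_alarm_cost
    return total / len(y_true)

# Training experience: transactions labeled in retrospect as fraudulent or not.
labels      = ["fraud", "not fraud", "not fraud", "fraud"]
predictions = ["not fraud", "not fraud", "fraud", "fraud"]
print(asymmetric_cost(labels, predictions))  # one miss plus one false alarm
```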

A diverse array of machine-learning algorithms has been developed to cover the wide variety of data and problem types exhibited across different machine-learning problems. Conceptually, machine-learning algorithms can be viewed as searching through a large space of candidate programs, guided by training experience, to find a program that optimizes the performance metric. Machine-learning algorithms vary greatly, in part by the way in which they represent candidate programs (e.g., decision trees, mathematical functions, and general programming languages) and in part by the way in which they search through this space of programs (e.g., optimization algorithms with well-understood convergence guarantees and evolutionary search methods that evaluate successive generations of randomly mutated programs). Here, we focus on approaches that have been particularly successful to date.

Many algorithms focus on function approximation problems, where the task is embodied in a function (e.g., given an input transaction, output a “fraud” or “not fraud” label), and the learning problem is to improve the accuracy of that function, with experience consisting of a sample of known input-output pairs of the function. In some cases, the function is represented explicitly as a parameterized functional form; in other cases, the function is implicit and obtained via a search process, a factorization, an optimization procedure, or a simulation-based procedure. Even when implicit, the function generally depends on parameters or other tunable degrees of freedom, and training corresponds to finding values for these parameters that optimize the performance metric.
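
As a minimal illustration of this view (illustrative, not from the paper), the sketch below represents the function explicitly as a parameterized linear form and trains it by finding parameter values that optimize a mean-squared-error metric on known input-output pairs:

```python
# A minimal sketch, assuming a linear parameterized form f(x) = w*x + b, of
# "training = finding parameter values that optimize the metric" (here, mean
# squared error) from a sample of known input-output pairs.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 100)    # unknown target function + noise

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):                            # gradient descent on MSE
    err = (w * x + b) - y
    w -= lr * (2 * err * x).mean()
    b -= lr * (2 * err).mean()
print(w, b)  # approaches the underlying parameters (3.0, 0.5)
```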

Whatever the learning algorithm, a key scientific and practical goal is to theoretically characterize the capabilities of specific learning algorithms and the inherent difficulty of any given learning problem: How accurately can the algorithm learn from a particular type and volume of training data? How robust is the algorithm to errors in its modeling assumptions or to errors in the training data? Given a learning problem with a given volume of training data, is it possible to design a successful algorithm or is this learning problem fundamentally intractable? Such theoretical characterizations of machine-learning algorithms and problems typically make use of the familiar frameworks of statistical decision theory and computational complexity theory. In fact, attempts to characterize machine-learning algorithms theoretically have led to blends of statistical and computational theory in which the goal is to simultaneously characterize the sample complexity (how much data are required to learn accurately) and the computational complexity (how much computation is required) and to specify how these depend on features of the learning algorithm such as the representation it uses for what it learns. A specific form of computational analysis that has proved particularly useful in recent years has been that of optimization theory, with upper and lower bounds on rates of convergence of optimization procedures merging well with the formulation of machine-learning problems as the optimization of a performance metric.

As a field of study, machine learning sits at the crossroads of computer science, statistics and a variety of other disciplines concerned with automatic improvement over time, and inference and decision-making under uncertainty. Related disciplines include the psychological study of human learning, the study of evolution, adaptive control theory, the study of educational practices, neuroscience, organizational behavior, and economics. Although the past decade has seen increased crosstalk with these other fields, we are just beginning to tap the potential synergies and the diversity of formalisms and experimental methods used across these multiple fields for studying systems that improve with experience.

Drivers of machine-learning progress

The past decade has seen rapid growth in the ability of networked and mobile computing systems to gather and transport vast amounts of data, a phenomenon often referred to as “Big Data.” The scientists and engineers who collect such data have often turned to machine learning for solutions to the problem of obtaining useful insights, predictions, and decisions from such data sets. Indeed, the sheer size of the data makes it essential to develop scalable procedures that blend computational and statistical considerations, but the issue is more than the mere size of modern data sets; it is the granular, personalized nature of much of these data. Mobile devices and embedded computing permit large amounts of data to be gathered about individual humans, and machine-learning algorithms can learn from these data to customize their services to the needs and circumstances of each individual. Moreover, these personalized services can be connected, so that an overall service emerges that takes advantage of the wealth and diversity of data from many individuals while still customizing to the needs and circumstances of each. Instances of this trend toward capturing and mining large quantities of data to improve services and productivity can be found across many fields of commerce, science, and government. Historical medical records are used to discover which patients will respond best to which treatments; historical traffic data are used to improve traffic control and reduce congestion; historical crime data are used to help allocate local police to specific locations at specific times; and large experimental data sets are captured and curated to accelerate progress in biology, astronomy, neuroscience, and other data-intensive empirical sciences. We appear to be at the beginning of a decades-long trend toward increasingly data-intensive, evidence-based decision-making across many aspects of science, commerce, and government.

With the increasing prominence of large-scale data in all areas of human endeavor has come a wave of new demands on the underlying machine-learning algorithms. For example, huge data sets require computationally tractable algorithms, highly personal data raise the need for algorithms that minimize privacy effects, and the availability of huge quantities of unlabeled data raises the challenge of designing learning algorithms to take advantage of it. The next sections survey some of the effects of these demands on recent work in machine-learning algorithms, theory, and practice.

Core methods and recent progress

The most widely used machine-learning methods are supervised learning methods. Supervised learning systems, including spam classifiers of e-mail, face recognizers over images, and medical diagnosis systems for patients, all exemplify the function approximation problem discussed earlier, where the training data take the form of a collection of (x, y) pairs and the goal is to produce a prediction y* in response to a query x*. The inputs x may be classical vectors or they may be more complex objects such as documents, images, DNA sequences, or graphs. Similarly, many different kinds of output y have been studied. Much progress has been made by focusing on the simple binary classification problem in which y takes on one of two values (for example, “spam” or “not spam”), but there has also been abundant research on problems such as multiclass classification (where y takes on one of K labels), multilabel classification (where y is labeled simultaneously by several of the K labels), ranking problems (where y provides a partial order on some set), and general structured prediction problems (where y is a combinatorial object such as a graph, whose components may be required to satisfy some set of constraints). An example of the latter problem is part-of-speech tagging, where the goal is to simultaneously label every word in an input sentence x as being a noun, verb, or some other part of speech. Supervised learning also includes cases in which y has real-valued components or a mixture of discrete and real-valued components.

Supervised learning systems generally form their predictions via a learned mapping f(x), which produces an output y for each input x (or a probability distribution over y given x). Many different forms of mapping f exist, including decision trees, decision forests, logistic regression, support vector machines, neural networks, kernel machines, and Bayesian classifiers. A variety of learning algorithms has been proposed to estimate these different types of mappings, and there are also generic procedures such as boosting and multiple kernel learning that combine the outputs of multiple learning algorithms. Procedures for learning f from data often make use of ideas from optimization theory or numerical analysis, with the specific form of machine-learning problems (e.g., that the objective function or function to be integrated is often the sum over a large number of terms) driving innovations. This diversity of learning architectures and algorithms reflects the diverse needs of applications, with different architectures capturing different kinds of mathematical structures, offering different levels of amenability to post-hoc visualization and explanation, and providing varying trade-offs between computational complexity, the amount of data, and performance.
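
A minimal sketch of this workflow, assuming scikit-learn is available; logistic regression stands in here for any of the mappings listed above:

```python
# A minimal sketch of supervised binary classification from (x, y) pairs,
# using scikit-learn (assumed installed); any of the mappings named above
# (trees, forests, SVMs, ...) could be swapped in for LogisticRegression.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

f = LogisticRegression().fit(X_tr, y_tr)   # learn the mapping f(x)
print(f.score(X_te, y_te))                 # accuracy on held-out queries x*
print(f.predict_proba(X_te[:1]))           # probability distribution over y
```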

One high-impact area of progress in supervised learning in recent years involves deep networks, which are multilayer networks of threshold units, each of which computes some simple parameterized function of its inputs. Deep learning systems make use of gradient-based optimization algorithms to adjust parameters throughout such a multilayered network based on errors at its output. Exploiting modern parallel computing architectures, such as graphics processing units originally developed for video gaming, it has been possible to build deep learning systems that contain billions of parameters and that can be trained on the very large collections of images, videos, and speech samples available on the Internet. Such large-scale deep learning systems have had a major effect in recent years in computer vision and speech recognition, where they have yielded major improvements in performance over previous approaches (see Fig. 2). Deep network methods are being actively pursued in a variety of additional applications from natural language translation to collaborative filtering.
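
The core training loop can be illustrated in a few lines. The sketch below (illustrative, not from the paper; real systems have billions of parameters, not dozens) trains a two-layer network of sigmoid units by gradient-based adjustment of its parameters in response to output errors, on the classic XOR problem:

```python
# A minimal sketch of the deep-learning training loop: a two-layer network
# whose weights are adjusted by gradient descent on the output error.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # XOR inputs
y = np.array([[0], [1], [1], [0]], float)               # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)          # hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)          # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = sigmoid(h @ W2 + b2)               # network output
    d_out = (out - y) * out * (1 - out)      # error gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagated to hidden layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0]
```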

Fig. 2. Automatic generation of text captions for images with deep networks. A convolutional neural network is trained to interpret images, and its output is then used by a recurrent neural network trained to generate a text caption (top). The sequence at the bottom shows the word-by-word focus of the network on different parts of the input image while it generates the caption word-by-word. [Adapted with permission from ]

The internal layers of deep networks can be viewed as providing learned representations of the input data. While much of the practical success in deep learning has come from supervised learning methods for discovering such representations, efforts have also been made to develop deep learning algorithms that discover useful representations of the input without the need for labeled training data. The general problem is referred to as unsupervised learning, a second paradigm in machine-learning research.

Broadly, unsupervised learning generally involves the analysis of unlabeled data under assumptions about structural properties of the data (e.g., algebraic, combinatorial, or probabilistic). For example, one can assume that data lie on a low-dimensional manifold and aim to identify that manifold explicitly from data. Dimension reduction methods—including principal components analysis, manifold learning, factor analysis, random projections, and autoencoders—make different specific assumptions regarding the underlying manifold (e.g., that it is a linear subspace, a smooth nonlinear manifold, or a collection of submanifolds). Another example of dimension reduction is the topic modeling framework depicted in Fig. 3. A criterion function is defined that embodies these assumptions—often making use of general statistical principles such as maximum likelihood, the method of moments, or Bayesian integration—and optimization or sampling algorithms are developed to optimize the criterion. As another example, clustering is the problem of finding a partition of the observed data (and a rule for predicting future data) in the absence of explicit labels indicating a desired partition. A wide range of clustering procedures has been developed, all based on specific assumptions regarding the nature of a “cluster.” In both clustering and dimension reduction, the concern with computational complexity is paramount, given that the goal is to exploit the particularly large data sets that are available if one dispenses with supervised labels.
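
As a minimal sketch of two of the unsupervised analyses named above (illustrative synthetic data; scikit-learn assumed available), one can reduce dimension under a linear-subspace assumption and then cluster the unlabeled, reduced data:

```python
# A minimal sketch of unsupervised learning: dimension reduction under a
# linear-subspace assumption (PCA), then clustering the unlabeled data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Unlabeled data: two well-separated blobs embedded in 50 dimensions.
X = np.vstack([rng.normal(0, 1, (100, 50)), rng.normal(4, 1, (100, 50))])

Z = PCA(n_components=2).fit_transform(X)              # identify a low-dim subspace
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))                            # roughly two 100-point clusters
```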

Fig. 3. Topic models. Topic modeling is a methodology for analyzing documents, where a document is viewed as a collection of words, and the words in the document are viewed as being generated by an underlying set of topics (denoted by the colors in the figure). Topics are probability distributions across words (leftmost column), and each document is characterized by a probability distribution across topics (histogram). These distributions are inferred based on the analysis of a collection of documents and can be viewed to classify, index, and summarize the content of documents. [From (31). Copyright 2012, Association for Computing Machinery, Inc. Reprinted with permission]

A third major machine-learning paradigm is reinforcement learning. Here, the information available in the training data is intermediate between supervised and unsupervised learning. Instead of training examples that indicate the correct output for a given input, the training data in reinforcement learning are assumed to provide only an indication as to whether an action is correct or not; if an action is incorrect, there remains the problem of finding the correct action. More generally, in the setting of sequences of inputs, it is assumed that reward signals refer to the entire sequence; the assignment of credit or blame to individual actions in the sequence is not directly provided. Indeed, although simplified versions of reinforcement learning known as bandit problems are studied, where it is assumed that rewards are provided after each action, reinforcement learning problems typically involve a general control-theoretic setting in which the learning task is to learn a control strategy (a “policy”) for an agent acting in an unknown dynamical environment, where that learned strategy is trained to choose actions for any given state, with the objective of maximizing its expected reward over time. The ties to research in control theory and operations research have increased over the years, with formulations such as Markov decision processes and partially observed Markov decision processes providing points of contact. Reinforcement-learning algorithms generally make use of ideas that are familiar from the control-theory literature, such as policy iteration, value iteration, rollouts, and variance reduction, with innovations arising to address the specific needs of machine learning (e.g., large-scale problems, few assumptions about the unknown dynamical environment, and the use of supervised learning architectures to represent policies). It is also worth noting the strong ties between reinforcement learning and many decades of work on learning in psychology and neuroscience, one notable example being the use of reinforcement learning algorithms to predict the response of dopaminergic neurons in monkeys learning to associate a stimulus light with subsequent sugar reward.
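
A minimal sketch of value iteration, one of the control-theoretic ideas named above, on a toy two-state Markov decision process (all transition probabilities and rewards are illustrative assumptions):

```python
# A minimal sketch of value iteration on a toy two-state, two-action MDP:
# repeated Bellman backups compute state values, from which a policy that
# maximizes expected discounted reward can be read off greedily.
import numpy as np

gamma = 0.9
# P[s, a, s'] = transition probability; R[s, a] = expected immediate reward.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(200):              # Bellman backups until (near) convergence
    Q = R + gamma * P @ V         # Q[s, a]: value of action a in state s
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)         # greedy policy with respect to the values
print(V, policy)
```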

Although these three learning paradigms help to organize ideas, much current research involves blends across these categories. For example, semisupervised learning makes use of unlabeled data to augment labeled data in a supervised learning context, and discriminative training blends architectures developed for unsupervised learning with optimization formulations that make use of labels. Model selection is the broad activity of using training data not only to fit a model but also to select from a family of models, and the fact that training data do not directly indicate which model to use leads to the use of algorithms developed for bandit problems and to Bayesian optimization procedures. Active learning arises when the learner is allowed to choose data points and query the trainer to request targeted information, such as the label of an otherwise unlabeled example. Causal modeling is the effort to go beyond simply discovering predictive relations among variables, to distinguish which variables causally influence others (e.g., a high white-blood-cell count can predict the existence of an infection, but it is the infection that causes the high white-cell count). Many issues influence the design of learning algorithms across all of these paradigms, including whether data are available in batches or arrive sequentially over time, how data have been sampled, requirements that learned models be interpretable by users, and robustness issues that arise when data do not fit prior modeling assumptions.
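
As one concrete instance of such a blend (illustrative, not from the paper; scikit-learn assumed available), the sketch below implements active learning by uncertainty sampling, where the learner repeatedly queries the label of the unlabeled point it is least certain about:

```python
# A minimal sketch of active learning by uncertainty sampling: the learner
# chooses data points and asks the "trainer" for their labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, random_state=0)
labeled = list(range(10))                          # start with 10 labeled points
pool = [i for i in range(300) if i not in labeled] # unlabeled pool

for _ in range(20):                                # 20 queries to the trainer
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(probs - 0.5)))]  # most uncertain point
    labeled.append(query)                          # trainer reveals its label
    pool.remove(query)

print(clf.score(X, y))
```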

Emerging trends

The field of machine learning is sufficiently young that it is still rapidly expanding, often by inventing new formalizations of machine-learning problems driven by practical applications. (An example is the development of recommendation systems, as described in Fig. 4.) One major trend driving this expansion is a growing concern with the environment in which a machine-learning algorithm operates. The word “environment” here refers in part to the computing architecture; whereas a classical machine-learning system involved a single program running on a single machine, it is now common for machine-learning systems to be deployed in architectures that include many thousands or tens of thousands of processors, such that communication constraints and issues of parallelism and distributed processing take center stage. Indeed, as depicted in Fig. 5, machine-learning systems are increasingly taking the form of complex collections of software that run on large-scale parallel and distributed computing platforms and provide a range of algorithms and services to data analysts.

Fig. 4. Recommendation systems. A recommendation system is a machine-learning system that is based on data that indicate links between a set of users (e.g., people) and a set of items (e.g., products). A link between a user and a product means that the user has indicated an interest in the product in some fashion (perhaps by purchasing that item in the past). The machine-learning problem is to suggest other items to a given user that he or she may also be interested in, based on the data across all users.
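
A minimal sketch of this problem (the link matrix and rank are illustrative assumptions, not from the paper): factor the user-item matrix into low-rank factors and score unobserved pairs to suggest new items:

```python
# A minimal sketch of the recommendation problem in Fig. 4: a low-rank
# factorization of a user-item link matrix scores unobserved pairs.
import numpy as np

# Rows = users, columns = items; 1 = expressed interest, 0 = unobserved.
links = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1]], float)

U, s, Vt = np.linalg.svd(links, full_matrices=False)
k = 2                                      # low-rank assumption
scores = U[:, :k] * s[:k] @ Vt[:k]         # predicted affinity for all pairs
scores[links == 1] = -np.inf               # do not re-recommend known items
print(scores.argmax(axis=1))               # top suggestion for each user
```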

Fig. 5. Data analytics stack. Scalable machine-learning systems are layered architectures that are built on parallel and distributed computing platforms. The architecture depicted here—an open-source data analysis stack developed in the Algorithms, Machines and People (AMP) Laboratory at the University of California, Berkeley—includes layers that interface to underlying operating systems; layers that provide distributed storage, data management, and processing; and layers that provide core machine-learning competencies such as streaming, subsampling, pipelines, graph processing, and model serving.

The word “environment” also refers to the source of the data, which ranges from a set of people who may have privacy or ownership concerns, to the analyst or decision-maker who may have certain requirements on a machine-learning system (for example, that its output be visualizable), and to the social, legal, or political framework surrounding the deployment of a system. The environment also may include other machine-learning systems or other agents, and the overall collection of systems may be cooperative or adversarial. Broadly speaking, environments provide various resources to a learning algorithm and place constraints on those resources. Increasingly, machine-learning researchers are formalizing these relationships, aiming to design algorithms that are provably effective in various environments and explicitly allow users to express and control trade-offs among resources.

As an example of resource constraints, let us suppose that the data are provided by a set of individuals who wish to retain a degree of privacy. Privacy can be formalized via the notion of “differential privacy,” which defines a probabilistic channel between the data and the outside world such that an observer of the output of the channel cannot infer reliably whether particular individuals have supplied data or not (18). Classical applications of differential privacy have involved insuring that queries (e.g., “what is the maximum balance across a set of accounts?”) to a privatized database return an answer that is close to that returned on the nonprivate data. Recent research has brought differential privacy into contact with machine learning, where queries involve predictions or other inferential assertions (e.g., “given the data I’ve seen so far, what is the probability that a new transaction is fraudulent?”). Placing the overall design of a privacy-enhancing machine-learning system within a decision-theoretic framework provides users with a tuning knob whereby they can choose a desired level of privacy that takes into account the kinds of questions that will be asked of the data and their own personal utility for the answers. For example, a person may be willing to reveal most of their genome in the context of research on a disease that runs in their family but may ask for more stringent protection if information about their genome is being used to set insurance rates.
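
A minimal sketch of the Laplace mechanism, the standard construction for realizing differential privacy on a numeric query (the data and parameter values are illustrative assumptions):

```python
# A minimal sketch of the Laplace mechanism: noise scaled to
# sensitivity / epsilon hides any one individual's contribution.
import numpy as np

rng = np.random.default_rng(0)
balances = rng.uniform(0, 1000, size=500)      # private per-account data

def private_mean(data, epsilon, upper=1000.0):
    """Release the mean with epsilon-differential privacy; the sensitivity
    of the mean of n values in [0, upper] is upper / n."""
    sensitivity = upper / len(data)
    return data.mean() + rng.laplace(0, sensitivity / epsilon)

print(balances.mean(), private_mean(balances, epsilon=0.1))
# Smaller epsilon = stronger privacy = noisier answer: the "tuning knob".
```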

Communication is another resource that needs to be managed within the overall context of a distributed learning system. For example, data may be distributed across distinct physical locations because their size does not allow them to be aggregated at a single site or because of administrative boundaries. In such a setting, we may wish to impose a bit-rate communication constraint on the machine-learning algorithm. Solving the design problem under such a constraint will generally show how the performance of the learning system degrades under decrease in communication bandwidth, but it can also reveal how the performance improves as the number of distributed sites (e.g., machines or processors) increases, trading off these quantities against the amount of data. Much as in classical information theory, this line of research aims at fundamental lower bounds on achievable performance and specific algorithms that achieve those lower bounds.

A major goal of this general line of research is to bring the kinds of statistical resources studied in machine learning (e.g., number of data points, dimension of a parameter, and complexity of a hypothesis class) into contact with the classical computational resources of time and space. Such a bridge is present in the “probably approximately correct” (PAC) learning framework, which studies the effect of adding a polynomial-time computation constraint on this relationship among error rates, training data size, and other parameters of the learning algorithm. Recent advances in this line of research include various lower bounds that establish fundamental gaps in performance achievable in certain machine-learning problems (e.g., sparse regression and sparse principal components analysis) via polynomial-time and exponential-time algorithms. The core of the problem, however, involves time-data tradeoffs that are far from the polynomial/exponential boundary. The large data sets that are increasingly the norm require algorithms whose time and space requirements are linear or sublinear in the problem size (number of data points or number of dimensions). Recent research focuses on methods such as subsampling, random projections, and algorithm weakening to achieve scalability while retaining statistical control. The ultimate goal is to be able to supply time and space budgets to machine-learning systems in addition to accuracy requirements, with the system finding an operating point that allows such requirements to be realized.
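
A minimal sketch of one such method, a random projection that cuts dimension while approximately preserving pairwise distances (the sizes are illustrative):

```python
# A minimal sketch of a random projection from 10,000 to 200 dimensions;
# pairwise distances are approximately preserved (the Johnson-Lindenstrauss
# property), so downstream learning can run on the much smaller data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10_000))              # 50 high-dimensional points

d = 200
R = rng.normal(size=(10_000, d)) / np.sqrt(d)  # random projection matrix
Z = X @ R                                      # the same 50 points in 200 dims

i, j = 0, 1
print(np.linalg.norm(X[i] - X[j]),             # original distance ...
      np.linalg.norm(Z[i] - Z[j]))             # ... approximately preserved
```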

Opportunities and challenges

Despite its practical and commercial successes, machine learning remains a young field with many underexplored research opportunities. Some of these opportunities can be seen by contrasting current machine-learning approaches to the types of learning we observe in naturally occurring systems such as humans and other animals, organizations, economies, and biological evolution. For example, whereas most machine learning algorithms are targeted to learn one specific function or data model from one single data source, humans clearly learn many different skills and types of knowledge, from years of diverse training experience, supervised and unsupervised, in a simple-to-more-difficult sequence (e.g., learning to crawl, then walk, then run). This has led some researchers to begin exploring the question of how to construct computer lifelong or never-ending learners that operate nonstop for years, learning thousands of interrelated skills or functions within an overall architecture that allows the system to improve its ability to learn one skill based on having learned another (26–28). Another aspect of the analogy to natural learning systems suggests the idea of team-based, mixed-initiative learning. For example, whereas current machine learning systems typically operate in isolation to analyze the given data, people often work in teams to collect and analyze data (e.g., biologists have worked as teams to collect and analyze genomic data, bringing together diverse experiments and perspectives to make progress on this difficult problem). New machine-learning methods capable of working collaboratively with humans to jointly analyze complex data sets might bring together the abilities of machines to tease out subtle statistical regularities from massive data sets with the abilities of humans to draw on diverse background knowledge to generate plausible explanations and suggest new hypotheses. Many theoretical results in machine learning apply to all learning systems, whether they are computer algorithms, animals, organizations, or natural evolution. As the field progresses, we may see machine-learning theory and algorithms increasingly providing models for understanding learning in neural systems, organizations, and biological evolution and see machine learning benefit from ongoing studies of these other types of learning systems.

As with any powerful technology, machine learning raises questions about which of its potential uses society should encourage and discourage. The push in recent years to collect new kinds of personal data, motivated by its economic value, leads to obvious privacy issues, as mentioned above. The increasing value of data also raises a second ethical issue: Who will have access to, and ownership of, online data, and who will reap its benefits? Currently, much data are collected by corporations for specific uses leading to improved profits, with little or no motive for data sharing. However, the potential benefits that society could realize, even from existing online data, would be considerable if those data were to be made available for public good.

To illustrate, consider one simple example of how society could benefit from data that is already online today by using this data to decrease the risk of global pandemic spread from infectious diseases. By combining location data from online sources (e.g., location data from cell phones, from credit-card transactions at retail outlets, and from security cameras in public places and private buildings) with online medical data (e.g., emergency room admissions), it would be feasible today to implement a simple system to telephone individuals immediately if a person they were in close contact with yesterday was just admitted to the emergency room with an infectious disease, alerting them to the symptoms they should watch for and precautions they should take. Here, there is clearly a tension and trade-off between personal privacy and public health, and society at large needs to make the decision on how to make this trade-off. The larger point of this example, however, is that, although the data are already online, we do not currently have the laws, customs, culture, or mechanisms to enable society to benefit from them, if it wishes to do so. In fact, much of these data are privately held and owned, even though they are data about each of us. Considerations such as these suggest that machine learning is likely to be one of the most transformative technologies of the 21st century. Although it is impossible to predict the future, it appears essential that society begin now to consider how to maximize its benefits.
