Problem Description

Running the program raises the error: TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

The full traceback is:

Traceback (most recent call last):
  File "tools/demo.py", line 97, in <module>
    visualize_result(gallery_img, detections, similarities)
  File "tools/demo.py", line 41, in visualize_result
    (x1, y1), x2 - x1, y2 - y1, fill=False, edgecolor="#4CAF50", linewidth=3.5
  File "/environment/miniconda3/lib/python3.7/site-packages/matplotlib/axes/_base.py", line 2358, in add_patch
    self._update_patch_limits(p)
  File "/environment/miniconda3/lib/python3.7/site-packages/matplotlib/axes/_base.py", line 2381, in _update_patch_limits
    patch_trf = patch.get_transform()
  File "/environment/miniconda3/lib/python3.7/site-packages/matplotlib/patches.py", line 278, in get_transform
    return self.get_patch_transform() + artist.Artist.get_transform(self)
  File "/environment/miniconda3/lib/python3.7/site-packages/matplotlib/patches.py", line 752, in get_patch_transform
    bbox = self.get_bbox()
  File "/environment/miniconda3/lib/python3.7/site-packages/matplotlib/patches.py", line 845, in get_bbox
    return transforms.Bbox.from_extents(x0, y0, x1, y1)
  File "/environment/miniconda3/lib/python3.7/site-packages/matplotlib/transforms.py", line 839, in from_extents
    bbox = Bbox(np.reshape(args, (2, 2)))
  File "<__array_function__ internals>", line 6, in reshape
  File "/home/featurize/work/.local/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 298, in reshape
    return _wrapfunc(a, 'reshape', newshape, order=order)
  File "/home/featurize/work/.local/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 54, in _wrapfunc
    return _wrapit(obj, method, *args, **kwds)
  File "/home/featurize/work/.local/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 43, in _wrapit
    result = getattr(asarray(obj), method)(*args, **kwds)
  File "/home/featurize/work/.local/lib/python3.7/site-packages/torch/tensor.py", line 458, in __array__
    return self.numpy()
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Solution

The root cause is that NumPy cannot read data that lives in GPU memory. In the call chain above, a CUDA tensor is handed to matplotlib, which passes it to np.reshape; NumPy then invokes Tensor.__array__, which calls self.numpy() on a CUDA tensor and fails. The clean fix is to call Tensor.cpu() on such tensors in your own code before handing them to NumPy or matplotlib; if you would rather not touch the calling code, you can instead patch a few lines of the PyTorch source, as described below.
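To see the failure in isolation, here is a minimal reproduction and the corresponding fix (a sketch, assuming a machine with a CUDA device; not part of the original demo code):

    import numpy as np
    import torch

    t = torch.ones(3, device="cuda")  # tensor stored in GPU memory
    # np.asarray(t) would raise: can't convert CUDA tensor to numpy
    arr = np.asarray(t.cpu())         # copy to host memory first, then convert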

From the traceback, the file to patch is /home/featurize/work/.local/lib/python3.7/site-packages/torch/tensor.py

Change self.numpy() to self.cpu().numpy(); that is, locate line 458 of tensor.py, where __array__ is defined:

    def __array__(self, dtype=None):
        if dtype is None:
            return self.numpy()
        else:
            return self.numpy().astype(dtype, copy=False)

and change it to

    def __array__(self, dtype=None):
        if dtype is None:
            return self.cpu().numpy()
        else:
            return self.cpu().numpy().astype(dtype, copy=False)

Editing with vim

  1. If vim is installed on the server, open tensor.py with it:

     cd /home/featurize/work/.local/lib/python3.7/site-packages/torch
     vim tensor.py

  2. In vim, type /numpy to search for numpy; press n to jump to the next match and N to the previous one.
  3. Press i to enter insert mode and make the change.
  4. When you are done, press Esc to return to normal mode, then type :wq to save tensor.py and quit vim.
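If you prefer not to edit the file interactively, the same substitution can be scripted. The snippet below is a sketch (not from the original post); it assumes the tensor.py path above and keeps a backup of the original file:

    import shutil

    path = "/home/featurize/work/.local/lib/python3.7/site-packages/torch/tensor.py"
    shutil.copy(path, path + ".bak")  # keep a backup of the original

    with open(path) as f:
        src = f.read()

    # Patches both return statements in Tensor.__array__ -- in this version of
    # the file, the only place `return self.numpy()` appears.
    src = src.replace("return self.numpy()", "return self.cpu().numpy()")

    with open(path, "w") as f:
        f.write(src)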

Using a Jupyter notebook

If you run the code in a notebook, for example on Colab, you can modify tensor.py with magic commands.

  1. First change to the directory containing tensor.py:

     cd /home/featurize/work/.local/lib/python3.7/site-packages/torch

     Note: on Colab, tensor.py lives under /usr/local/lib/python3.7/dist-packages/torch.

  2. Open tensor.py with %pycat tensor.py.
  3. Copy all of the code from tensor.py into a cell and change self.numpy() to self.cpu().numpy().
  4. Remove the original file: !rm tensor.py
  5. Add %%writefile tensor.py as the first line of the cell, then run the cell to write the new tensor.py. Running the program again will no longer raise the error.
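After the new tensor.py is written, restart the notebook kernel (the old module is already loaded), then verify the patch with a quick check (a sketch, assuming a CUDA runtime):

    import numpy as np
    import torch

    t = torch.ones(3, device="cuda")
    print(np.asarray(t))  # with the patch, this prints the array instead of raising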

Alternatively, skip steps 2-5 above: change to the directory containing tensor.py, copy my already-modified code below into a cell, and run the cell to write the patched tensor.py. (The file below comes from the PyTorch version installed on my machine; check that it matches yours before overwriting.)

%%writefile tensor.py
import sys
import torch
import torch._C as _C
from collections import OrderedDict
import torch.utils.hooks as hooks
import warnings
import weakref
from torch._six import imap
from torch._C import _add_docstr
from numbers import Number


# NB: If you subclass Tensor, and want to share the subclassed class
# across processes, you must also update torch/multiprocessing/reductions.py
# to define a ForkingPickler serialization mode for the class.
#
# NB: If you add a new method to Tensor, you must update
# torch/__init__.py.in to add a type annotation for your method;
# otherwise, it will not show up in autocomplete.
class Tensor(torch._C._TensorBase):
    def __deepcopy__(self, memo):
        if not self.is_leaf:
            raise RuntimeError("Only Tensors created explicitly by the user "
                               "(graph leaves) support the deepcopy protocol at the moment")
        if id(self) in memo:
            return memo[id(self)]
        with torch.no_grad():
            if self.is_sparse:
                new_tensor = self.clone()
            else:
                new_storage = self.storage().__deepcopy__(memo)
                new_tensor = self.new()
                new_tensor.set_(new_storage, self.storage_offset(), self.size(), self.stride())
            memo[id(self)] = new_tensor
            new_tensor.requires_grad = self.requires_grad
            return new_tensor

    def __reduce_ex__(self, proto):
        # See Note [Don't serialize hooks]
        torch.utils.hooks.warn_if_has_hooks(self)
        args = (self.storage(),
                self.storage_offset(),
                tuple(self.size()),
                self.stride(),
                self.requires_grad,
                OrderedDict())  # previously was self._backward_hooks
        return (torch._utils._rebuild_tensor_v2, args)

    def __setstate__(self, state):
        # Warning: this method is NOT called when you torch.load() a tensor;
        # that is managed by _rebuild_tensor_v2
        if not self.is_leaf:
            raise RuntimeError('__setstate__ can be only called on leaf Tensors')
        if len(state) == 4:
            # legacy serialization of Tensor
            self.set_(*state)
            return
        elif len(state) == 5:
            # legacy serialization of Variable
            self.data = state[0]
            state = (state[3], state[4], state[2])
        # The setting of _backward_hooks is expected to be a no-op.
        # See Note [Don't serialize hooks]
        self.requires_grad, _, self._backward_hooks = state

    def __repr__(self):
        # All strings are unicode in Python 3, while we have to encode unicode
        # strings in Python2. If we can't, let python decide the best
        # characters to replace unicode characters with.
        if sys.version_info > (3,):
            return torch._tensor_str._str(self)
        else:
            if hasattr(sys.stdout, 'encoding'):
                return torch._tensor_str._str(self).encode(
                    sys.stdout.encoding or 'UTF-8', 'replace')
            else:
                return torch._tensor_str._str(self).encode('UTF-8', 'replace')

    def backward(self, gradient=None, retain_graph=None, create_graph=False):
        r"""Computes the gradient of current tensor w.r.t. graph leaves.

        The graph is differentiated using the chain rule. If the tensor is
        non-scalar (i.e. its data has more than one element) and requires
        gradient, the function additionally requires specifying ``gradient``.
        It should be a tensor of matching type and location, that contains
        the gradient of the differentiated function w.r.t. ``self``.

        This function accumulates gradients in the leaves - you might need to
        zero them before calling it.

        Arguments:
            gradient (Tensor or None): Gradient w.r.t. the
                tensor. If it is a tensor, it will be automatically converted
                to a Tensor that does not require grad unless ``create_graph`` is True.
                None values can be specified for scalar Tensors or ones that
                don't require grad. If a None value would be acceptable then
                this argument is optional.
            retain_graph (bool, optional): If ``False``, the graph used to compute
                the grads will be freed. Note that in nearly all cases setting
                this option to True is not needed and often can be worked around
                in a much more efficient way. Defaults to the value of
                ``create_graph``.
            create_graph (bool, optional): If ``True``, graph of the derivative will
                be constructed, allowing to compute higher order derivative
                products. Defaults to ``False``.
        """
        torch.autograd.backward(self, gradient, retain_graph, create_graph)

    def register_hook(self, hook):
        r"""Registers a backward hook.

        The hook will be called every time a gradient with respect to the
        Tensor is computed. The hook should have the following signature::

            hook(grad) -> Tensor or None

        The hook should not modify its argument, but it can optionally return
        a new gradient which will be used in place of :attr:`grad`.

        This function returns a handle with a method ``handle.remove()``
        that removes the hook from the module.

        Example::

            >>> v = torch.tensor([0., 0., 0.], requires_grad=True)
            >>> h = v.register_hook(lambda grad: grad * 2)  # double the gradient
            >>> v.backward(torch.tensor([1., 2., 3.]))
            >>> v.grad

             2
             4
             6
            [torch.FloatTensor of size (3,)]

            >>> h.remove()  # removes the hook
        """
        if not self.requires_grad:
            raise RuntimeError("cannot register a hook on a tensor that "
                               "doesn't require gradient")
        if self._backward_hooks is None:
            self._backward_hooks = OrderedDict()
            if self.grad_fn is not None:
                self.grad_fn._register_hook_dict(self)
        handle = hooks.RemovableHandle(self._backward_hooks)
        self._backward_hooks[handle.id] = hook
        return handle

    def reinforce(self, reward):
        def trim(str):
            return '\n'.join([line.strip() for line in str.split('\n')])

        raise RuntimeError(trim(r"""reinforce() was removed.
            Use torch.distributions instead.
            See https://pytorch.org/docs/master/distributions.html

            Instead of:

            probs = policy_network(state)
            action = probs.multinomial()
            next_state, reward = env.step(action)
            action.reinforce(reward)
            action.backward()

            Use:

            probs = policy_network(state)
            # NOTE: categorical is equivalent to what used to be called multinomial
            m = torch.distributions.Categorical(probs)
            action = m.sample()
            next_state, reward = env.step(action)
            loss = -m.log_prob(action) * reward
            loss.backward()"""))

    detach = _add_docstr(_C._TensorBase.detach, r"""
    Returns a new Tensor, detached from the current graph.

    The result will never require gradient.

    .. note::

      Returned Tensor shares the same storage with the original one.
      In-place modifications on either of them will be seen, and may trigger
      errors in correctness checks.
      IMPORTANT NOTE: Previously, in-place size / stride / storage changes
      (such as `resize_` / `resize_as_` / `set_` / `transpose_`) to the returned tensor
      also update the original tensor. Now, these in-place changes will not update the
      original tensor anymore, and will instead trigger an error.
      For sparse tensors:
      In-place indices / values changes (such as `zero_` / `copy_` / `add_`) to the
      returned tensor will not update the original tensor anymore, and will instead
      trigger an error.
    """)

    detach_ = _add_docstr(_C._TensorBase.detach_, r"""
    Detaches the Tensor from the graph that created it, making it a leaf.
    Views cannot be detached in-place.
    """)

    def retain_grad(self):
        r"""Enables .grad attribute for non-leaf Tensors."""
        if self.grad_fn is None:  # no-op for leaves
            return
        if not self.requires_grad:
            raise RuntimeError("can't retain_grad on Tensor that has requires_grad=False")
        if hasattr(self, 'retains_grad'):
            return
        weak_self = weakref.ref(self)

        def retain_grad_hook(grad):
            var = weak_self()
            if var is None:
                return
            if var._grad is None:
                var._grad = grad.clone()
            else:
                var._grad = var._grad + grad

        self.register_hook(retain_grad_hook)
        self.retains_grad = True

    def is_pinned(self):
        r"""Returns true if this tensor resides in pinned memory"""
        storage = self.storage()
        return storage.is_pinned() if storage else False

    def is_shared(self):
        r"""Checks if tensor is in shared memory.

        This is always ``True`` for CUDA tensors.
        """
        return self.storage().is_shared()

    def share_memory_(self):
        r"""Moves the underlying storage to shared memory.

        This is a no-op if the underlying storage is already in shared memory
        and for CUDA tensors. Tensors in shared memory cannot be resized.
        """
        self.storage().share_memory_()
        return self

    def __reversed__(self):
        r"""Reverses the tensor along dimension 0."""
        if self.dim() == 0:
            return self
        else:
            return self.flip(0)

    def norm(self, p="fro", dim=None, keepdim=False, dtype=None):
        r"""See :func:`torch.norm`"""
        return torch.norm(self, p, dim, keepdim, dtype=dtype)

    def pstrf(self, upper=True):
        r"""See :func:`torch.pstrf`"""
        warnings.warn("torch.pstrf is deprecated in favour of torch.cholesky and will be removed "
                      "in the next release.", stacklevel=2)
        return super(Tensor, self).pstrf(upper=upper)

    def potrf(self, upper=True):
        r"""See :func:`torch.cholesky`"""
        warnings.warn("torch.potrf is deprecated in favour of torch.cholesky and will be removed "
                      "in the next release. Please use torch.cholesky instead and note that the "
                      ":attr:`upper` argument in torch.cholesky defaults to ``False``.", stacklevel=2)
        return super(Tensor, self).cholesky(upper=upper)

    def potri(self, upper=True):
        r"""See :func:`torch.cholesky_inverse`"""
        warnings.warn("torch.potri is deprecated in favour of torch.cholesky_inverse and will be "
                      "removed in the next release. Please use torch.cholesky_inverse instead and "
                      "note that the :attr:`upper` argument in torch.cholesky_inverse defaults to "
                      "``False``.", stacklevel=2)
        return super(Tensor, self).cholesky_inverse(upper=upper)

    def potrs(self, u, upper=True):
        r"""See :func:`torch.cholesky_solve`"""
        warnings.warn("torch.potrs is deprecated in favour of torch.cholesky_solve and "
                      "will be removed in the next release. Please use torch.cholesky_solve instead "
                      "and note that the :attr:`upper` argument in torch.cholesky_solve defaults "
                      "to ``False``.", stacklevel=2)
        return super(Tensor, self).cholesky_solve(u, upper=upper)

    def gesv(self, A):
        r"""See :func:`torch.solve`"""
        warnings.warn("torch.gesv is deprecated in favour of torch.solve and will be removed in the "
                      "next release. Please use torch.solve instead.", stacklevel=2)
        return super(Tensor, self).solve(A)

    def trtrs(self, A, upper=True, transpose=False, unitriangular=False):
        r"""See :func:`torch.triangular_solve`"""
        warnings.warn("torch.trtrs is deprecated in favour of torch.triangular_solve and will be "
                      "removed in the next release. Please use torch.triangular_solve instead.",
                      stacklevel=2)
        return super(Tensor, self).triangular_solve(A, upper=upper,
                                                    transpose=transpose, unitriangular=unitriangular)

    def btrifact(self, pivot=True):
        r"""See :func:`torch.lu`"""
        warnings.warn("torch.btrifact is deprecated in favour of torch.lu and will be removed in "
                      "the next release. Please use torch.lu instead.", stacklevel=2)
        return torch._lu_with_info(self, pivot=pivot, check_errors=True)

    def btrifact_with_info(self, pivot=True):
        r"""See :func:`torch.lu`"""
        warnings.warn("torch.btrifact_with_info is deprecated in favour of torch.lu with the "
                      "get_infos argument and will be removed in the next release. Please use "
                      "torch.lu with the get_infos argument set to True instead.", stacklevel=2)
        return torch._lu_with_info(self, pivot=pivot, check_errors=False)

    def btrisolve(self, LU_data, LU_pivots):
        r"""See :func:`torch.lu_solve`"""
        warnings.warn("torch.btrisolve is deprecated in favour of torch.lu_solve and will be "
                      "removed in the next release. Please use torch.lu_solve instead.",
                      stacklevel=2)
        return super(Tensor, self).lu_solve(LU_data=LU_data, LU_pivots=LU_pivots)

    def lu(self, pivot=True, get_infos=False):
        r"""See :func:`torch.lu`"""
        # If get_infos is True, then we don't need to check for errors and vice versa
        LU, pivots, infos = torch._lu_with_info(self, pivot=pivot, check_errors=(not get_infos))
        if get_infos:
            return LU, pivots, infos
        else:
            return LU, pivots

    def stft(self, n_fft, hop_length=None, win_length=None, window=None,
             center=True, pad_mode='reflect', normalized=False, onesided=True):
        r"""See :func:`torch.stft`

        .. warning::
          This function changed signature at version 0.4.1. Calling with
          the previous signature may cause error or return incorrect result.
        """
        return torch.stft(self, n_fft, hop_length, win_length, window, center,
                          pad_mode, normalized, onesided)

    def resize(self, *sizes):
        warnings.warn("non-inplace resize is deprecated")
        from torch.autograd._functions import Resize
        return Resize.apply(self, sizes)

    def resize_as(self, tensor):
        warnings.warn("non-inplace resize_as is deprecated")
        from torch.autograd._functions import Resize
        return Resize.apply(self, tensor.size())

    def split(self, split_size, dim=0):
        r"""See :func:`torch.split`"""
        if isinstance(split_size, int):
            return super(Tensor, self).split(split_size, dim)
        else:
            return super(Tensor, self).split_with_sizes(split_size, dim)

    def unique(self, sorted=True, return_inverse=False, return_counts=False, dim=None):
        r"""Returns the unique elements of the input tensor.

        See :func:`torch.unique`
        """
        return torch.unique(self, sorted=sorted, return_inverse=return_inverse,
                            return_counts=return_counts, dim=dim)

    def unique_consecutive(self, return_inverse=False, return_counts=False, dim=None):
        r"""Eliminates all but the first element from every consecutive group of equivalent elements.

        See :func:`torch.unique_consecutive`
        """
        return torch.unique_consecutive(self, return_inverse=return_inverse,
                                        return_counts=return_counts, dim=dim)

    def __rsub__(self, other):
        return _C._VariableFunctions.rsub(self, other)

    def __rdiv__(self, other):
        if self.dtype.is_floating_point:
            return self.reciprocal() * other
        else:
            return (self.double().reciprocal() * other).type_as(self)

    __rtruediv__ = __rdiv__
    __itruediv__ = _C._TensorBase.__idiv__

    __pow__ = _C._TensorBase.pow

    def __format__(self, format_spec):
        if self.dim() == 0:
            return self.item().__format__(format_spec)
        return object.__format__(self, format_spec)

    def __ipow__(self, other):
        raise NotImplementedError("in-place pow not implemented")

    def __rpow__(self, other):
        return self.new_tensor(other) ** self

    def __floordiv__(self, other):
        result = self / other
        if result.dtype.is_floating_point:
            result = result.trunc()
        return result

    def __rfloordiv__(self, other):
        result = other / self
        if result.dtype.is_floating_point:
            result = result.trunc()
        return result

    __neg__ = _C._TensorBase.neg

    __eq__ = _C._TensorBase.eq
    __ne__ = _C._TensorBase.ne
    __lt__ = _C._TensorBase.lt
    __le__ = _C._TensorBase.le
    __gt__ = _C._TensorBase.gt
    __ge__ = _C._TensorBase.ge
    __abs__ = _C._TensorBase.abs

    def __len__(self):
        if self.dim() == 0:
            raise TypeError("len() of a 0-d tensor")
        return self.shape[0]

    def __iter__(self):
        # NB: we use 'imap' and not 'map' here, so that in Python 2 we get a
        # generator and don't eagerly perform all the indexes.  This could
        # save us work, and also helps keep trace ordering deterministic
        # (e.g., if you zip(*hiddens), the eager map will force all the
        # indexes of hiddens[0] before hiddens[1], while the generator
        # map will interleave them.)
        if self.dim() == 0:
            raise TypeError('iteration over a 0-d tensor')
        if torch._C._get_tracing_state():
            warnings.warn('Iterating over a tensor might cause the trace to be incorrect. '
                          'Passing a tensor of different shape won\'t change the number of '
                          'iterations executed (and might lead to errors or silently give '
                          'incorrect results).', category=RuntimeWarning)
        return iter(imap(lambda i: self[i], range(self.size(0))))

    def __hash__(self):
        return id(self)

    def __dir__(self):
        tensor_methods = dir(self.__class__)
        tensor_methods.remove('volatile')  # deprecated
        attrs = list(self.__dict__.keys())
        keys = tensor_methods + attrs

        # property only available dense, cuda tensors
        if (not self.is_cuda) or self.is_sparse:
            keys.remove("__cuda_array_interface__")

        return sorted(keys)

    # Numpy array interface, to support `numpy.asarray(tensor) -> ndarray`
    __array_priority__ = 1000    # prefer Tensor ops over numpy ones

    def __array__(self, dtype=None):
        if dtype is None:
            return self.cpu().numpy()
        else:
            return self.cpu().numpy().astype(dtype, copy=False)

    # Wrap Numpy array again in a suitable tensor when done, to support e.g.
    # `numpy.sin(tensor) -> tensor` or `numpy.greater(tensor, 0) -> ByteTensor`
    def __array_wrap__(self, array):
        if array.dtype == bool:
            # Workaround, torch has no built-in bool tensor
            array = array.astype('uint8')
        return torch.from_numpy(array)

    def __contains__(self, element):
        r"""Check if `element` is present in tensor

        Arguments:
            element (Tensor or scalar): element to be checked
                for presence in current tensor
        """
        if isinstance(element, (torch.Tensor, Number)):
            return (element == self).any().item()
        return NotImplemented

    @property
    def __cuda_array_interface__(self):
        """Array view description for cuda tensors.

        See:
        https://numba.pydata.org/numba-doc/latest/cuda/cuda_array_interface.html
        """

        # raise AttributeError for unsupported tensors, so that
        # hasattr(cpu_tensor, "__cuda_array_interface__") is False.
        if not self.is_cuda:
            raise AttributeError(
                "Can't get __cuda_array_interface__ on non-CUDA tensor type: %s "
                "If CUDA data is required use tensor.cuda() to copy tensor to device memory." %
                self.type()
            )

        if self.is_sparse:
            raise AttributeError(
                "Can't get __cuda_array_interface__ on sparse type: %s "
                "Use Tensor.to_dense() to convert to a dense tensor first." %
                self.type()
            )

        # RuntimeError, matching tensor.__array__() behavior.
        if self.requires_grad:
            raise RuntimeError(
                "Can't get __cuda_array_interface__ on Variable that requires grad. "
                "If gradients aren't required, use var.detach() to get Variable that doesn't require grad."
            )

        # CUDA devices are little-endian and tensors are stored in native byte
        # order. 1-byte entries are endian-agnostic.
        typestr = {
            torch.float16: "<f2",
            torch.float32: "<f4",
            torch.float64: "<f8",
            torch.uint8: "|u1",
            torch.int8: "|i1",
            torch.int16: "<i2",
            torch.int32: "<i4",
            torch.int64: "<i8",
        }[self.dtype]

        itemsize = self.storage().element_size()

        shape = self.shape
        strides = tuple(s * itemsize for s in self.stride())
        data = (self.data_ptr(), False)  # read-only is false

        return dict(typestr=typestr, shape=shape, strides=strides, data=data, version=0)

    __module__ = 'torch'
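After the cell finishes, restart the kernel if torch has already been imported in the session, then re-run your program; the TypeError should no longer appear.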