Organize the users to be imported in CSV format:
Columns: last name, first name, full name, login name, password
张,三,张三,three.zhang,pass01
李,四,李四,four.li,pass02
王,五,王五,five.wang,pass03
刘,六,刘六,six.liu,pass04
赵,七,赵七,seven.zhao,pass05
Run the following, which uses dsadd to add the domain users (this form is for a cmd prompt; inside a .bat file the %a..%e variables would have to be doubled to %%a..%%e):
for /f "tokens=1,2,3,4,5 delims=," %a in (users.csv) do dsadd user "cn=%c,ou=cosmos.com,dc=cosmos,dc=com" -samid %d -upn %d@cosmos.com -fn %b -ln %a -pwd %e -disabled no
Check for duplicate names before running the import; otherwise it will fail partway through.
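To catch conflicts ahead of time, you can query AD for each login name in the CSV; a minimal sketch using dsquery from a cmd prompt (the sample login name comes from the file above):
:: check a single login name
dsquery user -samid three.zhang
:: or scan the whole CSV; any name that prints a DN already exists
for /f "tokens=4 delims=," %a in (users.csv) do @dsquery user -samid %a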
A simple requirement: on a Windows server, set up a scheduled task that runs a particular program only within a given time window, 9:00 to 15:00.
However, Windows Task Scheduler, unlike crontab, cannot restrict a task to a time range; it can only be set to start once every hour.
Method 1. Start a bootstrap program that decides from the current time whether to actually run the task.
Get the current time as a string:
SET curr_time=%TIME:~0,-5%
SET curr_time_str=%curr_time::=%
After getting the time, branch with GOTO to the code for each case.
The complete script:
@echo off
ECHO "Time Schedule Bootstrap"
REM %TIME% looks like "HH:MM:SS.cc"; keep "HH:MM" and drop the colons to get HHMM
SET curr_time=%TIME:~0,-5%
SET curr_time_str=%curr_time::=%
REM before 10:00 the hour has a leading space (" 930"); pad it with a zero
SET curr_time_str=%curr_time_str: =0%
REM prefix both sides with 1 so the leading zero is not parsed as octal
IF 1%curr_time_str% LEQ 10900 (GOTO time_cancel) ELSE (
IF 1%curr_time_str% LEQ 11500 (GOTO time_exec) ELSE (
GOTO time_cancel
)
)
exit 0
:time_exec
ECHO "Call CMD.exe"
CMD.exe
exit 0
:time_cancel
ECHO "Canceled"
exit 10
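To have Task Scheduler call this bootstrap once an hour, something like the following should work (the task name and script path are assumptions):
schtasks /Create /TN "TimeWindowBootstrap" /TR "C:\tasks\bootstrap.bat" /SC HOURLY /ST 09:00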
Method 2. Create one daily trigger per hour in the window; this has to be set up several times, as sketched below.
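For reference, Method 2 would look roughly like this: one daily task per hour in the window, created from a cmd prompt (task names and program path are assumptions):
for %H in (09 10 11 12 13 14 15) do schtasks /Create /TN "MyApp_%H" /TR "C:\tasks\myapp.exe" /SC DAILY /ST %H:00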
Win10: installing .NET Framework offline with DISM
DISM (Deployment Image Servicing and Management, DISM.exe) installs, uninstalls, configures, and updates features and packages in offline Windows(R) images and offline Windows Preinstallation Environment (Windows PE) images.
Install .NET Framework; the Windows installation media is mounted as drive E::
Dism /Online /Enable-Feature /FeatureName:netfx3 /Source:E:\sources\sxs
Other uses
1. Scan the image to check whether it is corrupted (a corrupted image causes various small problems, for example Windows Update may fail):
Dism /Online /Cleanup-Image /ScanHealth
2. Finally, repair the system image:
Dism /Online /Cleanup-Image /RestoreHealth
Repair the image from a local source, which can be a Windows installation disc or an ISO mounted in a virtual drive:
Dism /Online /Cleanup-Image /RestoreHealth /Source:c:\test\mount\windows /LimitAccess
In a PE environment you can also use the ImageFile approach and work directly with a Windows image (WIM) file:
Dism /Apply-Image /ImageFile:X:\sources\install.wim /Index:1 /ApplyDir:C:\
References
https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/what-is-dism
After Anaconda or Miniconda is installed in a container, conda init has to be run manually before the corresponding environments can be activated.
Assume conda's installation prefix is /opt/conda.
Looking at ~/.bashrc after conda init shows that conda runs the setup that matches the shell type:
__conda_setup="$('/opt/conda/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/opt/conda/etc/profile.d/conda.sh" ]; then
        . "/opt/conda/etc/profile.d/conda.sh"
    else
        export PATH="/opt/conda/bin:$PATH"
    fi
fi
unset __conda_setup
After installing conda, running the equivalent steps directly makes /bin/bash activate the base environment by default at startup:
ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc
echo "conda activate base" >> ~/.bashrc
export PATH="/opt/conda/bin:$PATH"
If you need a different environment, create that virtual environment first, then change
echo "conda activate base" >> ~/.bashrc |
echo "conda activate base" >> ~/.bashrc
to activate the environment you need.
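For example, to use an environment other than base (the environment name myenv and the Python version are only assumptions):
# create the environment first
conda create -n myenv python=3.7
# then have new shells activate it instead of base
echo "conda activate myenv" >> ~/.bashrc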
Nvidia CUDA development environment: enabling the GPU in Docker containers
1. Prepare a Docker >= 19.03 environment with nvidia-container-toolkit configured.
2. Check which GPU driver version is installed on the host and match it to the container version you need (see the sketch after this list).
3. Pull the base Docker image, either from the official registry or from Docker Hub:
https://ngc.nvidia.com/catalog/containers/nvidia:cuda/tags
https://gitlab.com/nvidia/container-images/cuda
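A quick sketch of steps 2 and 3 (the tag below matches the FROM line of the Dockerfile that follows; adjust it to what your driver supports):
# shows the installed driver version and the highest CUDA version it supports
nvidia-smi
# pull a matching CUDA base image
docker pull nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04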
The Dockerfile for cuda10-py36-conda:
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04
MAINTAINER Limc <limc@limc.com.cn>
#close frontend
ENV DEBIAN_FRONTEND noninteractive
# add cuda user
# --disabled-password = Don't assign a password
# using root group for OpenShift compatibility
ENV CUDA_USER_NAME=cuda10
ENV CUDA_USER_GROUP=root
# add user
RUN adduser --system --group --disabled-password --no-create-home --disabled-login $CUDA_USER_NAME
RUN adduser $CUDA_USER_NAME $CUDA_USER_GROUP
# Install basic dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cmake \
git \
wget \
libopencv-dev \
libsnappy-dev \
python-dev \
python-pip \
#tzdata \
vim
# Install conda for python
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-py37_4.8.2-Linux-x86_64.sh -O ~/miniconda.sh && \
/bin/bash ~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh
# Set locale
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
ENV PATH /opt/conda/bin:$PATH
RUN ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
echo "conda activate base" >> ~/.bashrc && \
find /opt/conda/ -follow -type f -name '*.a' -delete && \
find /opt/conda/ -follow -type f -name '*.js.map' -delete && \
/opt/conda/bin/conda clean -afy
# copy entrypoint.sh
#COPY ./entrypoint.sh /entrypoint.sh
# install
#ENTRYPOINT ["/entrypoint.sh"]
# Initialize workspace
COPY ./app /app
# make workdir
WORKDIR /app
# update pip if necessary
#RUN pip install --upgrade --no-cache-dir pip
# install gunicorn
# RUN pip install --no-cache-dir -r ./requirements.txt
# install use conda
#RUN conda install --yes --file ./requirements.txt
RUN while read requirement; do conda install --yes $requirement; done < requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh /entrypoint.sh
# install
ENTRYPOINT ["/entrypoint.sh"]
# switch to non-root user
USER $CUDA_USER_NAME
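Building the image might look like this (the tag is an assumption; the Makefile below reads the image name from a file called Name, so it is written there as well):
docker build -t cuda10-py36-conda .
echo cuda10-py36-conda > Name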
Makefile for running the container:
# note: recipe lines must be indented with tabs
IMG:=`cat Name`
GPU_OPT:=all
MOUNT_ETC:=
MOUNT_LOG:=
MOUNT_APP:=-v `pwd`/work/app:/app
MOUNT:=$(MOUNT_ETC) $(MOUNT_LOG) $(MOUNT_APP)
EXT_VOL:=
PORT_MAP:=
LINK_MAP:=
RESTART:=no
CONTAINER_NAME:=docker-cuda10-py36-hello
echo:
	echo $(IMG)
run:
	docker rm $(CONTAINER_NAME) || echo
	docker run -d --gpus $(GPU_OPT) --name $(CONTAINER_NAME) $(LINK_MAP) $(PORT_MAP) --restart=$(RESTART) \
	$(EXT_VOL) $(MOUNT) $(IMG)
run_i:
	docker rm $(CONTAINER_NAME) || echo
	docker run -i -t --gpus $(GPU_OPT) --name $(CONTAINER_NAME) $(LINK_MAP) $(PORT_MAP) \
	$(EXT_VOL) $(MOUNT) $(IMG) /bin/bash
exec_i:
	docker exec -i -t $(CONTAINER_NAME) /bin/bash
stop:
	docker stop $(CONTAINER_NAME)
rm: stop
	docker rm $(CONTAINER_NAME)
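Typical usage of the Makefile, assuming the image has been built and its tag written to Name:
make run      # start the container in the background with GPU access
make run_i    # start a new container with an interactive shell
make stop     # stop the running container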
entrypoint.sh:
#!/bin/bash
set -e
# Add python as command if needed
if [ "${1:0:1}" = '-' ]; then
  set -- python "$@"
fi
# Drop root privileges if we are running python
# allow the container to be started with `--user`
if [ "$1" = 'python' -a "$(id -u)" = '0' ]; then
  # Change the ownership of user-mutable directories to the cuda user
  for path in \
    /app \
    /usr/local/cuda/ \
  ; do
    chown -R cuda10:root "$path"
  done
  # re-exec as the non-root user (note: su-exec must be installed in the image)
  set -- su-exec cuda10 "$@"
  #exec su-exec elasticsearch "$BASH_SOURCE" "$@"
fi
# Otherwise assume the user wants to run their own process,
# for example a `bash` shell to explore this image
exec "$@"
A few things to note
1. Using the GPU requires root privileges inside the container; otherwise you will see errors like the following:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345
For security you can create a new user inside the container and add it to the root group.
2. The host GPU driver and CUDA version must match the official container image. cuDNN does not have to match; several different cuDNN versions can be used, as long as they stay within the range the GPU supports.
3. If a container using the GPU exits abnormally it may keep the GPU occupied; when it hangs, the GPU becomes unusable outside the container and restarting the Docker daemon does not help. The only fix is to reboot the machine.
Full source code
https://github.com/limccn/ultrasound-nerve-segmentation-in-tensorflow/commit/d7de1cbeb641d2fae4f5a78ff590a0254667b398
References
https://gitlab.com/nvidia/container-images/cuda
Upgrading to Docker 19.03 and nvidia-container-toolkit
Since Docker 19.03, NVIDIA provides native GPU support: installing the nvidia-container-toolkit package is enough, without the complicated configuration that nvidia-docker/nvidia-docker2 required. Note that docker-compose does not support this mechanism.
Installation steps
1. Make sure the host NVIDIA driver is installed correctly and CUDA/cuDNN are configured properly (the official documentation says CUDA does not strictly need to be set up on the host).
2. Install Docker, version 19.03 or later; see
https://docs.docker.com/engine/install/ubuntu/
3. Add the nvidia-docker repository:
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
4. Install nvidia-container-toolkit with the following commands and restart Docker:
sudo apt-get install -y nvidia-container-toolkit
#restart docker
sudo systemctl restart docker
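To verify that the toolkit works, run a throwaway CUDA container and check that the GPU is visible (the image tag is only an example):
docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi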
5. If nvidia-docker2 is already installed on the host, nvidia-container-toolkit can be installed alongside it and the two do not interfere with each other. Although nvidia-docker2 has officially been declared deprecated, it still works fine.
The main differences in usage
With nvidia-container-toolkit:
# with nvidia-container-toolkit
docker run --gpus "device=1,2"
With nvidia-docker2:
# with nvidia-docker2 (deprecated, but still usable)
docker run --runtime=nvidia
With nvidia-docker:
# with nvidia-docker
nvidia-docker run
A few pitfalls
1. nvidia-container-toolkit and nvidia-docker2 keep their container images in different locations and the images are not interchangeable; if you mix the two, pick the appropriate container version for each case.
2. I have not yet had success with multi-GPU support under nvidia-container-toolkit; for now it is safer to run each container on a single GPU. This may be related to the host configuration.
References
https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html
https://docs.nvidia.com/ngc/ngc-aws-setup-guide/running-containers.html#preparing-to-run-containers
https://github.com/NVIDIA/nvidia-docker
https://nvidia.github.io/nvidia-docker/
For ports that only need to reach the outside world through the uplink and never need to exchange traffic with other ports on the switch, Layer 2 isolation can be achieved with a port isolation group.
VLAN-1020 10.20.0.0/16 eg1/0/17-eg1/0/18 access
Note:
1. A port can belong to only one port isolation group.
2. Traffic that goes through the trunk/uplink is not isolated.
1. Create the port isolation group
# enter system view
sys
# create the port isolation group
[H3C] port-isolate group 1
# switch to GigabitEthernet1/0/17
[H3C] interface GigabitEthernet 1/0/17
# add the port to isolation group 1
[H3C-GigabitEthernet1/0/17]port-isolate enable group 1
# bring the port up
[H3C-GigabitEthernet1/0/17]undo shutdown
# done
[H3C-GigabitEthernet1/0/17]quit
# switch to GigabitEthernet1/0/18
[H3C] interface GigabitEthernet 1/0/18
# add the port to isolation group 1
[H3C-GigabitEthernet1/0/18]port-isolate enable group 1
# bring the port up
[H3C-GigabitEthernet1/0/18]undo shutdown
# done
[H3C-GigabitEthernet1/0/18]quit
2. Management
# display the port isolation group
[H3C] display port-isolate group 1
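Once the isolation behaves as expected, the running configuration should be saved so it survives a reboot (save will prompt for confirmation):
# save the running configuration
[H3C] save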