[WordPress] Nginx setup for a WordPress site in a subdirectory
Per the official documentation, either of the two approaches below meets the requirement.
Method 1: rewrite rules, suitable for php-fpm
location /your-subdir/ {
    index index.html index.php;
    if (-f $request_filename/index.html){
        rewrite (.*) $1/index.html break;
    }
    if (-f $request_filename/index.php){
        rewrite (.*) $1/index.php;
    }
    if (!-f $request_filename){
        rewrite (.*) /your-subdir/index.php;
    }
}
Method 2: try_files
location /your-subdir/ {
    index index.php;
    try_files $uri $uri/ /your-subdir/index.php?$args;
}
Required in both cases: handle wp-admin separately
rewrite /your-subdir/wp-admin$ $scheme://$host$uri/ permanent;
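To sanity-check the rules, a minimal sketch (the subdirectory blog and host example.com are hypothetical placeholders, not from this post):

# A pretty permalink should be routed through index.php and return 200
curl -I https://example.com/blog/sample-post/
# /wp-admin without a trailing slash should 301 to /wp-admin/ via the rewrite above
curl -I https://example.com/blog/wp-admin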
Win10: installing .NET Framework offline with DISM
DISM (Deployment Image Servicing and Management, DISM.exe) is used to install, uninstall, configure, and update features and packages in offline Windows images and offline Windows Preinstallation Environment (Windows PE) images.
To install .NET Framework, with the Windows installation media mounted as drive E:
Dism /Online /Enable-Feature /FeatureName:netfx3 /Source:E:\sources\sxs
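To confirm the feature was enabled, you can query its state afterwards (a standard DISM query, not from the original post):

Dism /Online /Get-FeatureInfo /FeatureName:NetFx3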
Other uses
1. Scan the image to check for corruption (a corrupted image causes many small problems, e.g. the system may fail to update):
Dism /Online /Cleanup-Image /ScanHealth
2. Repair the system image:
Dism /Online /Cleanup-Image /RestoreHealth
To repair using a local source, which can be a Windows installation disc or an ISO mounted in a virtual drive:
Dism /Online /Cleanup-Image /RestoreHealth /Source:c:\test\mount\windows /LimitAccess
In a PE environment you can also apply a Windows image (.wim) file directly with /ImageFile:
Dism /Apply-Image /ImageFile:X:\sources\install.wim /Index:1 /ApplyDir:C:\
Reference
https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/what-is-dism
FastAPI: a JWT authentication middleware
FastAPI still ships with few middlewares, so JWT support has to be put together yourself. Starlette already provides the authentication machinery; all that is needed is a custom AuthenticationBackend. Here we take the middleware approach: unpack the JWT token and pull the user information out of its payload.
The privately defined payload format is as follows:
{ "usid": "SkDQBhEjUfygRSeEBech", //UUID Short "uname": "test user name", //Username "mid":"700010001" // Member ID } |
Calling code:
app = FastAPI()
app.add_middleware(AuthenticationMiddleware, backend=JWTAuthenticationBackend(secret_key="YOUR_SECRET_KEY"))
The complete code:
import jwt
from starlette.authentication import (
    AuthenticationBackend, AuthenticationError, BaseUser, AuthCredentials,
    UnauthenticatedUser
)


class JWTUser(BaseUser):
    def __init__(self, user_id_short: str, member_number: str, user_name: str, token: str, payload: dict) -> None:
        self.user_name = user_name
        self.user_id_short = user_id_short
        self.member_number = member_number
        self.token = token
        self.payload = payload

    @property
    def is_authenticated(self) -> bool:
        return True

    @property
    def display_name(self) -> str:
        return self.user_name  # mask the member name with "*" here if needed

    @property
    def display_user_id_short(self) -> str:
        return self.user_id_short  # mask the member id here if needed

    @property
    def display_member_number(self) -> str:
        return self.member_number  # mask the member number with "*" here if needed


class JWTAuthenticationBackend(AuthenticationBackend):
    def __init__(self, secret_key: str, algorithm: str = 'HS256', prefix: str = 'JWT',
                 user_name_field: str = 'uname', user_id_field: str = 'usid',
                 member_number_field: str = 'mid'):
        self.secret_key = secret_key
        self.algorithm = algorithm
        self.prefix = prefix
        self.user_name_field = user_name_field
        self.user_id_field = user_id_field
        self.member_number_field = member_number_field

    @classmethod
    def get_token_from_header(cls, authorization: str, prefix: str):
        """Parses the Authorization header and returns only the token"""
        try:
            scheme, token = authorization.split()
        except ValueError:
            raise AuthenticationError('Could not separate Authorization scheme and token')
        if scheme.lower() != prefix.lower():
            raise AuthenticationError(f'Authorization scheme {scheme} is not supported')
        return token

    async def authenticate(self, request):
        if "Authorization" not in request.headers:
            return None

        auth = request.headers["Authorization"]
        token = self.get_token_from_header(authorization=auth, prefix=self.prefix)
        try:
            # PyJWT expects a list of accepted algorithms
            payload = jwt.decode(token, key=self.secret_key, algorithms=[self.algorithm])
        except jwt.InvalidTokenError as e:
            raise AuthenticationError(str(e))
        return AuthCredentials(["jwt_authenticated"]), JWTUser(
            user_id_short=payload[self.user_id_field],
            member_number=payload[self.member_number_field],
            user_name=payload[self.user_name_field],
            token=token, payload=payload)


class JWTWebSocketAuthenticationBackend(AuthenticationBackend):
    def __init__(self, secret_key: str, algorithm: str = 'HS256', query_param_name: str = 'jwt',
                 user_name_field: str = 'uname', user_id_field: str = 'usid',
                 member_number_field: str = 'mid'):
        self.secret_key = secret_key
        self.algorithm = algorithm
        self.query_param_name = query_param_name
        self.user_name_field = user_name_field
        self.user_id_field = user_id_field
        self.member_number_field = member_number_field

    async def authenticate(self, request):
        if self.query_param_name not in request.query_params:
            return AuthCredentials(), UnauthenticatedUser()

        token = request.query_params[self.query_param_name]
        try:
            payload = jwt.decode(token, key=self.secret_key, algorithms=[self.algorithm])
        except jwt.InvalidTokenError as e:
            raise AuthenticationError(str(e))
        return AuthCredentials(["jwt_authenticated"]), JWTUser(
            user_id_short=payload[self.user_id_field],
            member_number=payload[self.member_number_field],
            user_name=payload[self.user_name_field],
            token=token, payload=payload)
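For a quick end-to-end check, a hedged sketch of a client call (the /me endpoint, host, and token value are hypothetical; the header scheme matches the backend's default prefix of JWT):

# HTTP: the backend reads "Authorization: JWT <token>"
curl -H "Authorization: JWT eyJhbGciOiJIUzI1NiIs..." https://api.example.com/me
# WebSocket: the backend reads the token from the ?jwt= query parameter instead
# wss://api.example.com/ws?jwt=eyJhbGciOiJIUzI1NiIs...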
Docker: conda cannot activate environments in a container
After installing Anaconda or Miniconda in a container, you have to run conda init manually before the corresponding environment can be activated.
Assume conda's install prefix is
/opt/conda/
Inspecting ~/.bashrc after conda init shows that conda runs the setup hook matching the shell type:
__conda_setup="$('/opt/conda/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/opt/conda/etc/profile.d/conda.sh" ]; then
        . "/opt/conda/etc/profile.d/conda.sh"
    else
        export PATH="/opt/conda/bin:$PATH"
    fi
fi
unset __conda_setup
After installing conda, perform the same steps yourself and /bin/bash will activate the base environment by default on startup:
ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc
echo "conda activate base" >> ~/.bashrc
export PATH="/opt/conda/bin:$PATH"
To activate a different environment instead, set up that virtual environment first, then change
echo "conda activate base" >> ~/.bashrc
to the environment you need.
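For example, a minimal sketch (the environment name myenv and Python version are hypothetical):

# create the environment first, then point .bashrc at it instead of base
conda create -y -n myenv python=3.7
echo "conda activate myenv" >> ~/.bashrc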
Nvidia CUDA development environment: enabling the GPU in Docker containers
1. Prepare Docker >= 19.03 with nvidia-container-toolkit configured
2. Check which GPU driver version is installed on the host, and match the container version to it
3. Pull a base Docker image, from the official registry or Docker Hub:
https://ngc.nvidia.com/catalog/containers/nvidia:cuda/tags
https://gitlab.com/nvidia/container-images/cuda
Dockerfile for cuda10-py36-conda:
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04
MAINTAINER Limc <limc@limc.com.cn>

# close frontend
ENV DEBIAN_FRONTEND noninteractive

# add cuda user
# --disabled-password = Don't assign a password
# using root group for OpenShift compatibility
ENV CUDA_USER_NAME=cuda10
ENV CUDA_USER_GROUP=root

# add user
RUN adduser --system --group --disabled-password --no-create-home --disabled-login $CUDA_USER_NAME
RUN adduser $CUDA_USER_NAME $CUDA_USER_GROUP

# Install basic dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    cmake \
    git \
    wget \
    libopencv-dev \
    libsnappy-dev \
    python-dev \
    python-pip \
    #tzdata \
    vim

# Install conda for python
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-py37_4.8.2-Linux-x86_64.sh -O ~/miniconda.sh && \
    /bin/bash ~/miniconda.sh -b -p /opt/conda && \
    rm ~/miniconda.sh

# Set locale
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8

ENV PATH /opt/conda/bin:$PATH

RUN ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
    echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
    echo "conda activate base" >> ~/.bashrc && \
    find /opt/conda/ -follow -type f -name '*.a' -delete && \
    find /opt/conda/ -follow -type f -name '*.js.map' -delete && \
    /opt/conda/bin/conda clean -afy

# Initialize workspace
COPY ./app /app

# make workdir
WORKDIR /app

# update pip if necessary
#RUN pip install --upgrade --no-cache-dir pip

# install with pip
#RUN pip install --no-cache-dir -r ./requirements.txt

# install with conda
#RUN conda install --yes --file ./requirements.txt
RUN while read requirement; do conda install --yes $requirement; done < requirements.txt

# copy entrypoint.sh
COPY ./entrypoint.sh /entrypoint.sh

# install
ENTRYPOINT ["/entrypoint.sh"]

# switch to non-root user
USER $CUDA_USER_NAME
Makefile for running the container (recipe lines must be indented with tabs):
IMG:=`cat Name`
GPU_OPT:=all
MOUNT_ETC:=
MOUNT_LOG:=
MOUNT_APP:=-v `pwd`/work/app:/app
MOUNT:=$(MOUNT_ETC) $(MOUNT_LOG) $(MOUNT_APP)
EXT_VOL:=
PORT_MAP:=
LINK_MAP:=
RESTART:=no
CONTAINER_NAME:=docker-cuda10-py36-hello

echo:
	echo $(IMG)

run:
	docker rm $(CONTAINER_NAME) || echo
	docker run -d --gpus $(GPU_OPT) --name $(CONTAINER_NAME) $(LINK_MAP) $(PORT_MAP) --restart=$(RESTART) \
		$(EXT_VOL) $(MOUNT) $(IMG)

run_i:
	docker rm $(CONTAINER_NAME) || echo
	docker run -i -t --gpus $(GPU_OPT) --name $(CONTAINER_NAME) $(LINK_MAP) $(PORT_MAP) \
		$(EXT_VOL) $(MOUNT) $(IMG) /bin/bash

exec_i:
	docker exec -i -t $(CONTAINER_NAME) /bin/bash

stop:
	docker stop $(CONTAINER_NAME)

rm: stop
	docker rm $(CONTAINER_NAME)
entrypoint.sh
#!/bin/bash
set -e

# Add python as command if needed
if [ "${1:0:1}" = '-' ]; then
    set -- python "$@"
fi

# Drop root privileges if we are running python;
# allow the container to be started with `--user`
if [ "$1" = 'python' -a "$(id -u)" = '0' ]; then
    # Change the ownership of user-mutable directories to the cuda user
    for path in \
        /app \
        /usr/local/cuda/ \
    ; do
        chown -R cuda10:root "$path"
    done

    # run as the non-root cuda user (su-exec <user> <cmd>)
    set -- su-exec cuda10 "$@"
    #exec su-exec elasticsearch "$BASH_SOURCE" "$@"
fi

# As the argument is not related to python,
# assume the user wants to run their own process,
# for example a `bash` shell to explore this image
exec "$@"
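To tie the pieces together, a hedged sketch of building and smoke-testing the image (the tag cuda10-py36-conda is hypothetical; the Makefile above reads the real tag from the Name file):

docker build -t cuda10-py36-conda .
echo cuda10-py36-conda > Name    # the Makefile reads the image tag from this file
make run_i                       # interactive shell with all GPUs
docker run --rm --gpus all cuda10-py36-conda nvidia-smi   # quick GPU smoke test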
A few notes:
1. Running with the GPU requires root privileges, otherwise you get errors like
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345
For security, create a new user inside the container and add it to the root group.
2. The host GPU driver and CUDA version must match the official container's version. cudnn does not have to match; several different cudnn versions can be used, as long as they fall within the range the GPU supports.
3. A container that exits abnormally can keep the GPU occupied. If it hangs, the GPU becomes unusable from outside the container, restarting the docker daemon does not help, and only rebooting the machine fixes it. A quick check is sketched below.
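To see whether a dead container is still pinning a GPU, a minimal check (standard nvidia-smi usage, not from the original post):

# the process table at the bottom lists the PIDs holding each GPU;
# orphaned container processes show up here
nvidia-smi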
Full source code:
https://github.com/limccn/ultrasound-nerve-segmentation-in-tensorflow/commit/d7de1cbeb641d2fae4f5a78ff590a0254667b398
Reference
https://gitlab.com/nvidia/container-images/cuda
[docker] Upgrading to Docker 19.03 and using nvidia-container-toolkit
Since version 19.03, Docker has native NVIDIA GPU support: you only need to install the nvidia-container-toolkit package, with none of the complex configuration that nvidia-docker/nvidia-docker2 required. Note that docker-compose is not supported.
Installation steps
1. Confirm the host's nvidia driver is installed correctly and cuda/cudnn are configured properly; the official docs say CUDA on the host is not strictly required.
2. Install Docker, version 19.03 or later; see:
https://docs.docker.com/engine/install/ubuntu/
3. Add the nvidia-docker repository:
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
    sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
    sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
4. Install nvidia-container-toolkit with the following commands, then restart Docker:
sudo apt-get install -y nvidia-container-toolkit
# restart docker
sudo systemctl restart docker
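To verify the toolkit works, the usual smoke test is to run nvidia-smi in a CUDA base image (nvidia/cuda:10.0-base is one of NVIDIA's published tags; adjust it to your driver/CUDA version):

docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi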
5. If nvidia-docker2 is already installed, nvidia-container-toolkit can be installed alongside it and the two do not interfere. nvidia-docker2 has officially been declared deprecated, but it still works fine.
Main differences in usage
With nvidia-container-toolkit:
# with nvidia-container-toolkit; note the inner quotes when selecting multiple devices
docker run --gpus '"device=1,2"'
With nvidia-docker2:
# with nvidia-docker2; deprecated, but still usable
docker run --runtime=nvidia
With nvidia-docker:
# with nvidia-docker (v1)
nvidia-docker run
A few pitfalls
1. nvidia-container-toolkit and nvidia-docker2 keep their container images in different places and the images are not interchangeable; to mix the two, pick the image that matches the runtime you need.
2. I have not managed to get multi-GPU working with nvidia-container-toolkit; for now a container is best run on a single GPU. This may be related to the host configuration.
References
https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html
https://docs.nvidia.com/ngc/ngc-aws-setup-guide/running-containers.html#preparing-to-run-containers
https://github.com/NVIDIA/nvidia-docker
https://nvidia.github.io/nvidia-docker/
H3C switch: configuring physically isolated ports
Ports that only need to reach the outside world through the uplink, and do not need to exchange traffic with other ports on the switch, can be isolated at layer 2 with a port-isolation group.
Example: VLAN 1020, subnet 10.20.0.0/16, ports GigabitEthernet 1/0/17 and 1/0/18, access mode
Note:
1. A port can belong to only one port-isolation group
2. Traffic passing through trunk/uplink ports is not isolated
1. Create the port-isolation group
# Enter system view
<H3C> system-view
# Create the port-isolation group
[H3C] port-isolate group 1
# Switch to GigabitEthernet1/0/17
[H3C] interface GigabitEthernet 1/0/17
# Join port-isolation group 1
[H3C-GigabitEthernet1/0/17]port-isolate enable group 1
# Bring the port up
[H3C-GigabitEthernet1/0/17]undo shutdown
# Done
[H3C-GigabitEthernet1/0/17]quit
# Switch to GigabitEthernet1/0/18
[H3C] interface GigabitEthernet 1/0/18
# Join port-isolation group 1
[H3C-GigabitEthernet1/0/18]port-isolate enable group 1
# Bring the port up
[H3C-GigabitEthernet1/0/18]undo shutdown
# Done
[H3C-GigabitEthernet1/0/18]quit
2. Management
# Show the port-isolation group
[H3C] display port-isolate group 1
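Finally, remember to persist the configuration, or the isolation settings are lost on reboot (standard Comware save command; follow the confirmation prompts):

# Save the running configuration to the startup configuration file
[H3C] save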