简栈文化

A Java engineer's path to growth~

Creating the Virtual Machines

On my Mac, I created three virtual machines with Parallels Desktop; the details:

CentOS7-Node1:
10.211.55.7
parallels/centos-test

CentOS7-Node2:
10.211.55.8
parallels/centos-test

CentOS7-Node3:
10.211.55.9
parallels/centos-test

Installing the Master

The CentOS7-Node1 machine will be the Master node.

Configuring yum

Update the yum source:

[parallels@CentOS7-Node1 yum.repos.d]$ cd /etc/yum.repos.d
[parallels@CentOS7-Node1 yum.repos.d]$ sudo touch kubernetes.repo
# contents of kubernetes.repo (edit the file, e.g. with sudo vim):
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
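Note that `touch` only creates an empty file; the `[kubernetes]` block above still has to be written into it (for example with `sudo vim`). As a sketch, the whole file can also be written non-interactively with a heredoc. Here `REPO_DIR` defaults to the current directory purely for illustration; on a real node it would be `/etc/yum.repos.d` and the command would run as root:

```shell
# Write kubernetes.repo in one step; point REPO_DIR at /etc/yum.repos.d (as root) on a real node.
REPO_DIR="${REPO_DIR:-.}"
cat > "$REPO_DIR/kubernetes.repo" <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
```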
Installing the Kubernetes Environment

After some evaluation, kubeadm is what most people recommend for building a cluster, and it is what the company cluster uses too, so kubeadm it is, without hesitation.

[parallels@CentOS7-Node1 yum.repos.d]$ yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks
You need to be root to perform this command.
[parallels@CentOS7-Node1 yum.repos.d]$ sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
kubernetes | 1.4 kB 00:00:00
kubernetes/primary | 58 kB 00:00:00
kubernetes 421/421
Resolving Dependencies
--> Running transaction check
...... # (verbose log output omitted)
Dependency Installed:
conntrack-tools.x86_64 0:1.4.4-5.el7_7.2 cri-tools.x86_64 0:1.13.0-0 kubernetes-cni.x86_64 0:0.7.5-0 libnetfilter_cthelper.x86_64 0:1.0.0-10.el7_7.1
libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7_7.1 libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 socat.x86_64 0:1.7.3.2-2.el7

Complete!
yum utilities and update
yum install -y yum-utils device-mapper-persistent-data lvm2
yum update
Starting Docker

Start Docker and enable it at boot:

[parallels@CentOS7-Node1 ~]$ sudo systemctl enable docker && systemctl start docker
[sudo] password for parallels:
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Starting kubelet

Start kubelet and enable it at boot:

[parallels@CentOS7-Node1 ~]$ sudo systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to manage system services or units.
Authenticating as: Parallels (parallels)
Password:
==== AUTHENTICATION COMPLETE ===

kubeadm config

[parallels@CentOS7-Node1 Workspace]$ kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: centos7-node1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Save the defaults to a file for editing:
kubeadm config print init-defaults > /home/parallels/Workspace/init.default.yaml

Configuring Docker

Docker has to be installed first; see my earlier post: http://www.cyblogs.com/centos7shang-an-zhuang-docker/

Some useful Docker commands

yum install docker-ce-18.09.9-3.el7 # pin the version to 18.09.9-3.el7

systemctl status docker
systemctl restart docker
systemctl daemon-reload

Downloading the Kubernetes Images

Configure a registry mirror. (This alone turned out not to help much; a domestic mirror is still needed later.)

echo '{"registry-mirrors":["https://docker.mirrors.ustc.edu.cn"]}' > /etc/docker/daemon.json
# If this fails with a permission error, add the line manually with vim, then restart the Docker service
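The permission error happens because the shell performs the `>` redirection before `sudo` ever runs, so it is attempted as the normal user. A common workaround is piping through `sudo tee`. The sketch below writes to `./daemon.json` purely for illustration; on the real host the target is `/etc/docker/daemon.json` and `tee` runs under `sudo`:

```shell
# On the real host: echo '...' | sudo tee /etc/docker/daemon.json
TARGET="${TARGET:-./daemon.json}"
echo '{"registry-mirrors":["https://docker.mirrors.ustc.edu.cn"]}' | tee "$TARGET" > /dev/null
cat "$TARGET"
```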

Check which images and versions Kubernetes depends on:

[parallels@CentOS7-Node1 Workspace]$ kubeadm config images list
W1022 13:51:12.550171 19704 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1022 13:51:12.550458 19704 version.go:102] falling back to the local client version: v1.16.2
k8s.gcr.io/kube-apiserver:v1.16.2
k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2

With a working network connection, this single command should be enough, but in practice it fails:

# Pulling from k8s.gcr.io is basically impossible from here, so the images have to be
# fetched from Aliyun first and re-tagged; the error looks like this:
[parallels@CentOS7-Node1 Workspace]$ sudo kubeadm config images pull --config=/home/parallels/Workspace/init.default.yaml
[sudo] password for parallels:
failed to pull image "k8s.gcr.io/kube-apiserver:v1.16.0": output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
Fetching the Images

Fetch the images another way, via Aliyun's registry:

touch kubeadm.sh

#!/bin/bash

KUBE_VERSION=v1.16.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.15-0
CORE_DNS_VERSION=1.6.2

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(
kube-apiserver:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-proxy:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION}
)

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done

Pulling the images

chmod u+x kubeadm.sh # make it executable
sudo ./kubeadm.sh

All that's left is to wait patiently...

Check the resulting local images

[root@CentOS7-Node1 Workspace]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.16.0 b305571ca60a 4 weeks ago 217MB
k8s.gcr.io/kube-proxy v1.16.0 c21b0c7400f9 4 weeks ago 86.1MB
k8s.gcr.io/kube-controller-manager v1.16.0 06a629a7e51c 4 weeks ago 163MB
k8s.gcr.io/kube-scheduler v1.16.0 301ddc62b80b 4 weeks ago 87.3MB
k8s.gcr.io/etcd 3.3.15-0 b2756210eeab 6 weeks ago 247MB
k8s.gcr.io/coredns 1.6.2 bf261d157914 2 months ago 44.1MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 22 months ago 742kB
Now for the first attempt at kubeadm init:
[parallels@CentOS7-Node1 Workspace]$ sudo kubeadm init --config=init.default.yaml 
[init] Using Kubernetes version: v1.16.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Disabling the Firewall

To deal with the firewall problem, see: http://www.cyblogs.com/centos7cha-kan-he-guan-bi-fang-huo-qiang/

The cgroupfs error
detected "cgroupfs" as the Docker cgroup driver

Create or edit /etc/docker/daemon.json:

{
  "registry-mirrors": [
    "https://registry.docker-cn.com"
  ],
  "live-restore": true,
  "exec-opts": [
    "native.cgroupdriver=systemd"
  ]
}

# Note: "native.cgroupdriver=systemd" switches Docker's cgroup driver to systemd.
# (JSON does not allow inline comments, so keep the file itself comment-free.)
# Restart Docker afterwards:
systemctl restart docker
systemctl status docker
Disabling Swap

It turns out swap also has to be disabled:

Oct 22 16:35:36 CentOS7-Node1 kubelet[1395]: F1022 16:35:36.065168    1395 server.go:271] failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename                                Type                Size        Used        Priority /dev/dm-1                               partition        2097148        29952        -1]
swapoff -a
# To disable swap permanently, comment out the swap line in the following file:
sudo vi /etc/fstab
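As a sketch, the swap line can also be commented out with `sed` (check the result before rebooting). The example below runs against a sample file; on the real machine the target would be `/etc/fstab`, run with `sudo`:

```shell
# Build a sample fstab and comment out its swap entry with sed (GNU sed).
cat > ./fstab.sample <<'EOF'
/dev/mapper/centos-root /       xfs   defaults 0 0
/dev/mapper/centos-swap swap    swap  defaults 0 0
EOF
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)/#\1/' ./fstab.sample
cat ./fstab.sample
```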

Running kubeadm init Again

kubeadm init --config=init.default.yaml

[init] Using Kubernetes version: v1.16.2
...
[preflight] Pulling images required for setting up a Kubernetes cluster
...
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
...
[certs] Using certificateDir folder "/etc/kubernetes/pki"
...
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
... (the kubelet-check messages repeat several times)

An error. Even after changing the Docker version and retrying, it still fails:

[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
Resetting kubeadm

At this point kubeadm has to be reset. The steps:

kubeadm reset
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
echo '1' > /proc/sys/net/ipv4/ip_forward
Checking logs with journalctl
journalctl -xefu kubelet

It still fails here because of the earlier configuration, so init.default.yaml was trimmed down to:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: k8s.gcr.io
kubernetesVersion: v1.16.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: "10.96.0.0/16"

Continue with the init step: kubeadm init --config=/home/parallels/Workspace/init.default.yaml

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.211.55.7:6443 --token imwj34.ksfiwzj5ga80du0r \
--discovery-token-ca-cert-hash sha256:7ffef85880ed43dd539afa045715f9ad5bef15e904cede96213d6cfd4adb0795

It really wasn't easy; I had to run this over and over. The image versions and the init phase are the main places where things go wrong.

Verifying the ConfigMaps
[root@CentOS7-Node1 ~]# kubectl get -n kube-system configmap   
NAME DATA AGE
coredns 1 5m49s
extension-apiserver-authentication 6 5m53s
kube-proxy 2 5m49s
kubeadm-config 2 5m50s
kubelet-config-1.16 1 5m50s

Installing a Node and Joining the Cluster

Install the same base environment as on the Master (docker, kubelet, kubeadm, etc.) by repeating the steps above.

scp root@10.211.55.7:/home/parallels/Workspace/init.default.yaml .
scp root@10.211.55.7:/home/parallels/Workspace/kubeadm.sh .
yum install docker-ce-18.06.3.ce-3.el7

Create a join configuration file, join-config.yaml, with the following content:

apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 10.211.55.7:6443
    token: imwj34.ksfiwzj5ga80du0r
    unsafeSkipCAVerification: true
  tlsBootstrapToken: imwj34.ksfiwzj5ga80du0r

Here, apiServerEndpoint is the Master's address, 10.211.55.7 in this case; token and tlsBootstrapToken come from the last line of the kubeadm init output on the Master. Pay close attention to the YAML formatting, or the command will fail.

[root@CentOS7-Node2 Workspace]# kubeadm join  --config=join-config.yaml
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@CentOS7-Node2 Workspace]# swapoff -a
[root@CentOS7-Node2 Workspace]# kubeadm join --config=join-config.yaml
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Installing the Network Plugin

On the Master, run:

[root@CentOS7-Node1 Workspace]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
centos7-node1 NotReady master 154m v1.16.2
centos7-node2 NotReady <none> 2m49s v1.16.2

The nodes show NotReady because no CNI network plugin has been installed yet. We'll use the weave plugin:

[root@CentOS7-Node1 Workspace]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created

Verifying the Cluster

[root@CentOS7-Node1 Workspace]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5644d7b6d9-9fr9p 0/1 ContainerCreating 0 172m
coredns-5644d7b6d9-pmpkq 0/1 ContainerCreating 0 172m
etcd-centos7-node1 1/1 Running 0 171m
kube-apiserver-centos7-node1 1/1 Running 0 171m
kube-controller-manager-centos7-node1 1/1 Running 0 171m
kube-proxy-ccnht 1/1 Running 0 21m
kube-proxy-rdq9l 1/1 Running 0 172m
kube-scheduler-centos7-node1 1/1 Running 0 171m
weave-net-6hw26 2/2 Running 0 8m7s
weave-net-qv8vz 2/2 Running 0 8m7s

coredns stays stuck in the ContainerCreating state. Let's look at the details:

[root@CentOS7-Node1 Workspace]# kubectl describe pod coredns-5644d7b6d9-9fr9p -n kube-system
Name: coredns-5644d7b6d9-9fr9p
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: centos7-node2/10.211.55.8
Start Time: Tue, 22 Oct 2019 20:49:47 +0800
Labels: k8s-app=kube-dns
pod-template-hash=5644d7b6d9
.... # some fields omitted
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Warning FailedScheduling <unknown> default-scheduler 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
Normal Scheduled <unknown> default-scheduler Successfully assigned kube-system/coredns-5644d7b6d9-9fr9p to centos7-node2
Warning FailedCreatePodSandBox 2m kubelet, centos7-node2 Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Normal SandboxChanged 119s kubelet, centos7-node2 Pod sandbox changed, it will be killed and re-created.

The kubelet logs reveal some errors:

Oct 22 10:50:15 CentOS7-Node1 kubelet[7649]: F1022 10:50:15.170550    7649 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", 
Oct 22 10:50:15 CentOS7-Node1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a

Deleting the pod forces it to be recreated:

[root@CentOS7-Node1 ~]# kubectl delete pod coredns-5644d7b6d9-9fr9p -n kube-system
pod "coredns-5644d7b6d9-9fr9p" deleted

Having read a great many articles and posts, I found few that cover the whole story; most only describe the happy path, while in practice all sorts of odd problems crop up in between. Honestly, k8s is convenient, but the barrier to entry is high and it depends on a great many components. Version-mismatch problems in particular are hard to resolve.

Finally, a screenshot of the working setup:

http://static.cyblogs.com/WX20191023-164029@2x.png

Summary of Common Commands

systemctl daemon-reload

systemctl restart kubelet

kubectl get pods -n kube-system

kubectl describe pod coredns-5644d7b6d9-lqtks -n kube-system

kubectl delete pod coredns-5644d7b6d9-qh4bc -n kube-system
# allow pods to be scheduled on the master node
kubectl taint nodes --all node-role.kubernetes.io/master-
# forbid pods on the master node
kubectl taint nodes k8s node-role.kubernetes.io/master=true:NoSchedule

kubeadm reset

systemctl enable docker && systemctl start docker

systemctl enable kubelet && systemctl start kubelet

journalctl -xefu kubelet


Setting the yum source

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Update yum to the latest:

sudo yum update

List all Docker versions in the repository (note the mirror lines are shuffled into the list by sort -r):

[parallels@CentOS7-Node1 ~]$ yum list docker-ce --showduplicates | sort -r
* updates: mirrors.njupt.edu.cn
Loaded plugins: fastestmirror, langpacks
* extras: mirrors.njupt.edu.cn
docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
Determining fastest mirrors
* base: mirrors.aliyun.com
Available Packages

Installing Docker

sudo yum install docker-ce # only the stable repo is enabled by default
sudo yum install <FQPN> # e.g. sudo yum install docker-ce-18.09.9-3.el7

Start Docker and enable it at boot

sudo systemctl start docker
sudo systemctl enable docker

Removing an existing Docker installation

yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine


While setting up the Kubernetes environment I hit a firewall-related error, so here's a quick refresher on firewalld.

Check the firewall status

[parallels@CentOS7-Node1 Workspace]$ sudo firewall-cmd --state
running

Stop the firewall

[parallels@CentOS7-Node1 Workspace]$ sudo systemctl stop firewalld.service 
[sudo] password for parallels:
[parallels@CentOS7-Node1 Workspace]$ sudo firewall-cmd --state
not running

Disable firewalld at boot

systemctl disable firewalld.service

Switching the yum Source

To speed things up, first switch the yum source:

[root@iZ94tq694y3Z ghost]# touch /etc/yum.repos.d/gitlab_gitlab-ce.repo

with the following content:

[gitlab-ce]
name=Gitlab CE Repository
baseurl=https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el$releasever/
gpgcheck=0
enabled=1

Installing GitLab

Install as the root user:

yum install curl openssh-server openssh-clients postfix cronie
service postfix start
chkconfig postfix on

yum makecache
yum install gitlab-ce
gitlab-ctl reconfigure

My Aliyun server was too underpowered, so the process stalled here for a long time. Aliyun's monitoring showed memory was maxed out, and I had to pay to upgrade the instance; it is now 1 CPU and 2 GB of RAM.

Continuing, it failed again:

Running handlers:
There was an error running gitlab-ctl reconfigure:

execute[semodule -i /opt/gitlab/embedded/selinux/rhel/7/gitlab-7.2.0-ssh-keygen.pp] (gitlab::selinux line 20) had an error: Errno::ENOMEM: execute[Guard resource] (dynamically defined) had an error: Errno::ENOMEM: Cannot allocate memory - fork(2)

Checking memory

[root@iZ94tq694y3Z ghost]# free  -m
total used free shared buffers cached
Mem: 1841 1760 80 52 5 76
-/+ buffers/cache: 1678 162
Swap: 0 0 0

Fix the Cannot allocate memory - fork problem by adding swap:

[root@iZ94tq694y3Z swapfile]# mkdir /swapfile
[root@iZ94tq694y3Z swapfile]# cd /swapfile
[root@iZ94tq694y3Z swapfile]# dd if=/dev/zero of=swap bs=1024 count=2000000
2000000+0 records in
2000000+0 records out
2048000000 bytes (2.0 GB) copied, 12.8547 s, 159 MB/s
[root@iZ94tq694y3Z swapfile]# mkswap -f swap
Setting up swapspace version 1, size = 1999996 KiB
no label, UUID=da70ea74-4bac-484a-9c14-2c20e265c267
[root@iZ94tq694y3Z swapfile]# swapon swap
swapon: /swapfile/swap: insecure permissions 0644, 0600 suggested.
[root@iZ94tq694y3Z swapfile]# free -h
total used free shared buffers cached
Mem: 1.8G 1.7G 68M 52M 1.2M 85M
-/+ buffers/cache: 1.6G 155M
Swap: 1.9G 0B 1.9G
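The `insecure permissions 0644, 0600 suggested` warning from swapon is easy to fix by tightening the file mode. Illustrated on a stand-in file here; on the server the target would be `/swapfile/swap`:

```shell
# On the server: chmod 600 /swapfile/swap
touch ./swap.sample
chmod 600 ./swap.sample
stat -c '%a' ./swap.sample
```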

Re-running gitlab-ctl reconfigure now succeeds:

Running handlers:
Running handlers complete
Chef Client finished, 35/743 resources updated in 01 minutes 31 seconds
gitlab Reconfigured!

Continue configuring: gitlab-rails console production

[root@iZ94tq694y3Z swapfile]# gitlab-rails console production
DEPRECATION WARNING: Passing the environment's name as a regular argument is deprecated and will be removed in the next Rails version. Please, use the -e option instead. (called from require at bin/rails:4)
--------------------------------------------------------------------------------
GitLab: 12.3.5 (2417d5becc7)
GitLab Shell: 10.0.0
PostgreSQL: 10.9
--------------------------------------------------------------------------------
Loading production environment (Rails 5.2.3)
irb(main):001:0>
irb(main):002:0> user = User.where(id:1).first
=> #<User id:1 @root>
irb(main):003:0> user.password = 'xxxxxx'
=> "xxxxxx"
irb(main):004:0> user.save!
Enqueued ActionMailer::DeliveryJob (Job ID: c9f8831f-25c1-429c-bc3a-073a2a1e5fb8) to Sidekiq(mailers) with arguments: "DeviseMailer", "password_change", "deliver_now", #<GlobalID:0x00007fa89b887368 @uri=#<URI::GID gid://gitlab/User/1>>
=> true
irb(main):005:0>

Configuring the Domain

All GitLab configuration lives in /etc/gitlab/gitlab.rb. I only changed a few items:

web_server['external_users'] = ['root'] # user for the external web server

nginx['enable'] = false # don't use the bundled nginx; use the separately installed one

external_url 'https://gitlab.cyblogs.com' # the site's domain

unicorn['port'] = 8081 # service port

# mail settings
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.sina.com"
gitlab_rails['smtp_port'] = 465
gitlab_rails['smtp_user_name'] = "chengcheng222e@sina.com"
gitlab_rails['smtp_password'] = "xxxxxx"
gitlab_rails['smtp_domain'] = "sina.com"
gitlab_rails['smtp_authentication'] = "plain"
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = false

Configure nginx:

[root@iZ94tq694y3Z conf.d]# cat gitlab.cyblogs.com.conf

# DNS for the domain must resolve to this host; look up a DNS tutorial if needed
upstream gitlab.cyblogs.com {
    server 127.0.0.1:8081;
}

server {
    listen 80;
    server_name gitlab.cyblogs.com;

    location / {
        root html;
        index index.html index.htm;
        proxy_pass http://gitlab.cyblogs.com;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect default;
    }

    location /assets {
        root /opt/gitlab/embedded/service/gitlab-rails/public;
    }

    error_page 404 /404.html;
    error_page 500 /500.html;
    error_page 502 /502.html;

    location ~ ^/(404|500|502)(-custom)?\.html$ {
        root /opt/gitlab/embedded/service/gitlab-rails/public;
        internal;
    }
}

GitLab Backup and Restore

# this command can go into crontab for scheduled backups
/usr/bin/gitlab-rake gitlab:backup:create
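For example, a nightly 2 a.m. backup entry could be appended to a crontab (the schedule here is made up; CRON=1 suppresses progress output). Illustrated against a local sample file rather than the real crontab:

```shell
# On the server, the line would go into `crontab -e` instead of a sample file.
echo '0 2 * * * /usr/bin/gitlab-rake gitlab:backup:create CRON=1' >> ./crontab.sample
cat ./crontab.sample
```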

Completely Uninstalling GitLab

# stop gitlab
gitlab-ctl stop
# remove the package (gitlab-ce here, since that is what was installed)
rpm -e gitlab-ce
# remove all remaining files
find / -name gitlab | xargs rm -rf

Common GitLab Commands

gitlab-ctl start    # start all gitlab components
gitlab-ctl stop # stop all gitlab components
gitlab-ctl restart # restart all gitlab components
gitlab-ctl status # check service status
vim /etc/gitlab/gitlab.rb # edit the gitlab config file
gitlab-ctl reconfigure # recompile the gitlab configuration
gitlab-rake gitlab:check SANITIZE=true --trace # check gitlab
gitlab-ctl tail # view logs
gitlab-ctl tail nginx/gitlab_access.log

Verification

Open in a browser: http://gitlab.cyblogs.com/users/sign_in

http://static.cyblogs.com/WX20191014-222324@2x.png

There is still plenty of configuration left to set up; more to study over time~


Background

On the test environment, a colleague ran into a primary-key conflict when inserting an ID. That was very strange: as everyone understood it, with the Sequence feature the ID ranges each node holds in memory should all be disjoint, so this shouldn't be possible. Otherwise our understanding was about to be overturned~

Questions

  • Did someone insert a row manually and set the ID by hand?
  • Did someone manually change the Sequence value?
  • Why does the database contain the same ID in different tables? Is the multi-threaded writing broken?

Preliminary Investigation

  • Confirmed that nobody inserted IDs manually; all of them come from the program.
  • Nobody has the time or energy to tweak the Sequence value by hand; really, who would bother?
  • The rows with identical IDs in different tables clearly came from two different application nodes.

Conclusion: the Sequence value ranges obtained by the two machines overlapped.

That is indeed what the symptoms show, but does it really overturn our understanding? The problem was fairly serious, so we took it very seriously: we had to find the root cause!

Detailed Investigation

At this point we found one code change: the innerStep (inner step) of the TDDL Sequence had been changed from 1000 to 5000. It was enlarged because, during a large data migration, a bigger step reduces the number of database round-trips for extending the ID range (which shows the developer was already paying close attention to performance).

My understanding was also that even if my inner step differs from everyone else's, it shouldn't cause Sequence conflicts; Sequence should guarantee that by itself. I suspect you thought the same?

Half suspecting a bug in Sequence, and determined to fix the problem, we started reading the source code: the surest route to the truth~

The version referenced here is tddl-sequence-3.2.jar, using GroupSequence.

Finding the Root Cause

The first thing to read is the nextValue() method; the core code:

newValue = oldValue + outStep; // new value = old value in the database + outer step

int affectedRows = stmt.executeUpdate(); // write the new value back to the database

return new SequenceRange(newValue + 1, newValue + innerStep); // this node's range is [newValue + 1, newValue + innerStep]

At this point the trap is visible: if two applications use different inner steps, their ranges can intersect. That is indeed what caused the problem; but it seems to defy common sense, so why did the designers do it this way? At that point I had to get tddl-sequence completely clear.

Below, the parts of the source that weren't obvious at first reading are explained.

The Relationship between Inner Step and Outer Step

outStep = innerStep * dscount; // outer step = inner step * number of data sources holding the sequence

This is essentially a tddl-sequence convention: outStep is the unit by which the stored sequence value advances on every update.

Most setups configure dscount as 1, i.e. only database 00.

What if the step size is changed?

private boolean check(int index, long value) {
    return (value % outStep) == (index * innerStep); // if this is false, outStep has been changed
}
// With a single data source, index = 0, so in theory value must be a multiple of outStep.

adjust = true; // configure this as true, so the sequence table is fixed up automatically when the step changes
// How the adjustment works:
newValue = (newValue - newValue % outStep) + outStep + index * innerStep;
// newValue - newValue % outStep rounds down to the nearest multiple of outStep, then one outStep is added.
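The adjustment arithmetic can be checked with plain shell arithmetic (a simplified sketch; index = 0 for a single data source). Applying the formula to a stored value of 1000 after outStep has been raised to 5000: check fails, and the value is realigned to the next multiple of outStep:

```shell
value=1000; outStep=5000; innerStep=5000; index=0

# check(): 1000 % 5000 = 1000, which is not index * innerStep = 0, so check fails
check=$(( (value % outStep) == (index * innerStep) ))

# adjustment: round down to a multiple of outStep, then add one outStep
adjusted=$(( (value - value % outStep) + outStep + index * innerStep ))

echo "check=$check adjusted=$adjusted"   # check=0 adjusted=5000
```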

Back to the Problem

Back to the incident, with a concrete example:

Using the brilliant diagram drawn by a teammate:

http://static.cyblogs.com/7ba2efab-2797-4bda-a62c-21a3a3d6b4eb.jpg

To explain: two different applications, one with a step of 5000 and one with 1000. The node with the larger step will overrun the ranges of the node with the smaller step;

When the database value is 1000:

projectA, with outStep=5000, gets the range [6000, 11000], fetching the sequence first;

projectB, with outStep=1000, gets the range [7000, 8000], fetching afterwards.

If the large-step node inserts rows first, using ID values the small-step node hasn't consumed yet, then the small-step node's later inserts hit primary-key conflicts.
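The overlap can be reproduced with plain shell arithmetic (a simplified model of the nextValue() logic above, ignoring the adjustment): both apps bump the same stored value by their own outStep and take [newValue + 1, newValue + innerStep] as their range, so projectB's range lands entirely inside projectA's:

```shell
value=1000                                     # shared sequence value in the database

# projectA: innerStep = outStep = 5000 (dscount = 1), fetches first
a_new=$(( value + 5000 )); value=$a_new        # writes 6000 back
a_lo=$(( a_new + 1 )); a_hi=$(( a_new + 5000 ))   # range [6001, 11000]

# projectB: innerStep = outStep = 1000, fetches next
b_new=$(( value + 1000 )); value=$b_new        # writes 7000 back
b_lo=$(( b_new + 1 )); b_hi=$(( b_new + 1000 ))   # range [7001, 8000]

echo "A: [$a_lo, $a_hi]  B: [$b_lo, $b_hi]"
# B's whole range sits inside A's range: both apps will hand out IDs 7001-8000
```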

A Remaining Question

Why, when the database value is 1000 and the step is 5000, is the range [6000, 11000]? Doesn't that waste 5000 IDs?

This is caused by the step adjustment: the sequence forces the database value to be a multiple of outStep.

Creating an Admin User

➜  kubernetes  kubectl patch svc -n kube-system kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
service/kubernetes-dashboard patched
➜ kubernetes kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
➜ kubernetes kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created

Determine the secret NAME

➜  kubernetes  kubectl get secret -n=kube-system
NAME TYPE DATA AGE
attachdetach-controller-token-jxx56 kubernetes.io/service-account-token 3 5d3h
bootstrap-signer-token-9hb7w kubernetes.io/service-account-token 3 5d3h
certificate-controller-token-m8mpc kubernetes.io/service-account-token 3 5d3h
clusterrole-aggregation-controller-token-sb7dv kubernetes.io/service-account-token 3 5d3h
coredns-token-tdchv kubernetes.io/service-account-token 3 5d3h
cronjob-controller-token-2f79z kubernetes.io/service-account-token 3 5d3h
daemon-set-controller-token-svzw7 kubernetes.io/service-account-token 3 5d3h
dashboard-admin-token-mwjwf kubernetes.io/service-account-token 3 61s
default-token-sznp4 kubernetes.io/service-account-token 3 5d3h
deployment-controller-token-qdh74 kubernetes.io/service-account-token 3 5d3h
disruption-controller-token-hd7sb kubernetes.io/service-account-token 3 5d3h
endpoint-controller-token-wnnrr kubernetes.io/service-account-token 3 5d3h
expand-controller-token-jc8ls kubernetes.io/service-account-token 3 5d3h
generic-garbage-collector-token-x2p5z kubernetes.io/service-account-token 3 5d3h
horizontal-pod-autoscaler-token-vf4kn kubernetes.io/service-account-token 3 5d3h
job-controller-token-mtz64 kubernetes.io/service-account-token 3 5d3h
kube-proxy-token-6xgld kubernetes.io/service-account-token 3 5d3h
kubernetes-dashboard-certs Opaque 0 5d3h
kubernetes-dashboard-key-holder Opaque 2 5d3h
kubernetes-dashboard-token-lx9kx kubernetes.io/service-account-token 3 5d3h
namespace-controller-token-8scnl kubernetes.io/service-account-token 3 5d3h
node-controller-token-rh4fk kubernetes.io/service-account-token 3 5d3h
persistent-volume-binder-token-xhwzv kubernetes.io/service-account-token 3 5d3h
pod-garbage-collector-token-7wtzh kubernetes.io/service-account-token 3 5d3h
pv-protection-controller-token-9nqsb kubernetes.io/service-account-token 3 5d3h
pvc-protection-controller-token-59kcr kubernetes.io/service-account-token 3 5d3h
replicaset-controller-token-pq8q9 kubernetes.io/service-account-token 3 5d3h
replication-controller-token-tp9zd kubernetes.io/service-account-token 3 5d3h
resourcequota-controller-token-wm4j6 kubernetes.io/service-account-token 3 5d3h
service-account-controller-token-g2h2r kubernetes.io/service-account-token 3 5d3h
service-controller-token-7qrks kubernetes.io/service-account-token 3 5d3h
statefulset-controller-token-gcrtq kubernetes.io/service-account-token 3 5d3h
token-cleaner-token-swg2m kubernetes.io/service-account-token 3 5d3h
ttl-controller-token-tgwnf kubernetes.io/service-account-token 3 5d3h

Getting the TOKEN

➜  kubernetes  kubectl describe secret -n=kube-system dashboard-admin-token-mwjwf
Name: dashboard-admin-token-mwjwf
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 0c547a29-f000-11e9-a91a-025000000001

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbXdqd2YiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMGM1NDdhMjktZjAwMC0xMWU5LWE5MWEtMDI1MDAwMDAwMDAxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.cvbCJYR98zNWQeRjW4QmEqVPKD4CxL5EpR7bwEfCZqU_hJiNIKJubIGYWAkbB47waEBFOgIU9Aj98BGqtIAki-eL_kZFVYDIrQGzYQHZVngmCcUwG0u_PKazH9bgU_sfsw9t2_FZv-pD8aiVpGXtbS9EFWpf-VTIrZS-CSlTp0LEgPZLir8Jp_T3X4sbBfgtMbHTzkbz8WCvL_SeWxRIf7o-hLY703KNU4hkbNUxhC2ur73Irp3dSpgyANrS3G3cQjM1Uinh7pJl1ay-gRd0jPCwcZxUW3XKfLqS2-vwIpnYZ_j26Dj9oqDChAIxhK2T6VfBOdpp93AlXzT3_0VSYQ
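The token above is a JWT: three base64url segments separated by dots, with a plain-JSON payload in the middle. Decoding it is a one-liner; the sketch below uses a toy single-segment payload rather than the full token, and on macOS the decode flag is spelled `-D` instead of `-d`:

```shell
# Decode a JWT payload segment (toy example; real tokens have three dot-separated segments)
payload='eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50In0'
# base64url drops the '=' padding; restore it before decoding
pad=$(( (4 - ${#payload} % 4) % 4 ))
padding=$(printf '%*s' "$pad" '' | tr ' ' '=')
decoded=$(printf '%s%s' "$payload" "$padding" | base64 -d)
echo "$decoded"
```

The decoded JSON carries the issuer, namespace, and service-account name that identify the dashboard-admin account.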

Generate the kubeconfig file

➜  kubernetes  DASH_TOKEN=$(kubectl get secret -n kube-system dashboard-admin-token-mwjwf -o jsonpath={.data.token}|base64 -D)
➜ kubernetes kubectl config set-cluster kubernetes --server=https://kubernetes.docker.internal:6443 --kubeconfig=/Users/chenyuan/Tools/Docker/kubernetes/dashbord-admin.conf
Cluster "kubernetes" set.
➜ kubernetes kubectl config set-credentials dashboard-admin --token=$DASH_TOKEN --kubeconfig=/Users/chenyuan/Tools/Docker/kubernetes/dashbord-admin.conf
User "dashboard-admin" set.
➜ kubernetes kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/Users/chenyuan/Tools/Docker/kubernetes/dashbord-admin.conf
Context "dashboard-admin@kubernetes" created.
➜ kubernetes kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/Users/chenyuan/Tools/Docker/kubernetes/dashbord-admin.conf
Switched to context "dashboard-admin@kubernetes".
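The four commands above just write entries into a single file. The resulting dashbord-admin.conf looks roughly like this (a sketch: the token value is elided and field order may differ):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://kubernetes.docker.internal:6443
contexts:
- name: dashboard-admin@kubernetes
  context:
    cluster: kubernetes
    user: dashboard-admin
current-context: dashboard-admin@kubernetes
users:
- name: dashboard-admin
  user:
    token: <decoded-service-account-token>
```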

Start the service and verify

kubectl proxy --address='0.0.0.0' --accept-hosts='^.*$'

Visit: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

http://static.cyblogs.com/WX20191016-190028@2x.png

Git Basics and Common Commands

1. Setup and help

git help <command>              # show help for the given command
git config --global user.name "your name"
git config --global user.email "your email"

2. Modifying and committing

git status                      # show working-tree status
git add <file>                  # stage changes to the given file
git add .                       # stage all modified files under the current directory (since Git 2.0 this also stages deletions)
git add --all                   # stage every change in the working tree, including deleted files
git add -A                      # same as above
git commit -m "comments"        # commit the staged changes with a message
git commit <file>               # commit the staged changes to the given file
git commit .                    # commit all staged files
git commit -a                   # stage modified and deleted tracked files and commit in one step (new files are not included)
git commit -am "comments"       # same as above, plus a message

3. Undoing and restoring

git checkout -- <file>          # discard working-tree changes to the given file
git checkout .                  # discard all working-tree changes
git reset <file>                # unstage the given file (index back to HEAD, working tree untouched)
git reset -- .                  # unstage everything
git reset --hard                # reset to the last commit, discarding all local changes
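A throwaway demo of the unstage/discard pair above, run in a temporary repository (assumes git is installed; file names and messages are made up):

```shell
# Stage a change, unstage it with reset, then discard it with checkout
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo v1 > a.txt
git add a.txt
git commit -qm 'initial commit'
echo v2 > a.txt
git add a.txt            # the change is now in the staging area
git reset -q a.txt       # back out of the staging area; working tree still has v2
git checkout -- a.txt    # discard the working-tree change as well
content=$(cat a.txt)
echo "$content"
```

After both steps the file is back to its committed content, v1.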

4. Viewing commits

git show                        # show the contents of the latest commit
git show $id                    # show the contents of the given commit
git log                         # list commit history
git log <file>                  # list commits that touched the given file
git log -p <file>               # show the full diff of each commit to the file
git log -p -2                   # show the full diff of the last two commits
git log --stat                  # show per-commit change statistics

5. Comparing differences

git diff <file>                 # compare the working tree with the staging area for the file
git diff <$id1> <$id2>          # compare two commits
git diff <branch1>..<branch2>   # compare two branches
git diff --staged               # compare the staging area with the last commit
git diff --stat                 # show only change statistics
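The difference between `git diff` and `git diff --staged` is easiest to see live. A sketch in a temporary repository (assumes git is installed; names are made up):

```shell
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
printf 'one\n' > f.txt
git add f.txt && git commit -qm 'initial commit'
printf 'one\ntwo\n' > f.txt
unstaged=$(git diff --stat)          # working tree vs staging area: shows the edit
git add f.txt
after_add=$(git diff --stat)         # now empty: nothing left unstaged
staged=$(git diff --staged --stat)   # staging area vs HEAD: shows the edit again
echo "$staged"
```

The same edit "moves" from one diff to the other as soon as it is staged.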

6. Tags (versions)

git tag                         # list tags
git tag [name]                  # create a tag
git tag -d [name]               # delete a tag
git ls-remote --tags origin     # list remote tags
git push origin [name]          # push a local tag to the remote

7. Branches

git branch <new_branch>         # create a new branch
git checkout <branch>           # switch to a branch
git checkout -b <new_branch>    # create a new branch and switch to it
git branch -v                   # show each branch's last commit
git branch -r                   # list remote-tracking branches
git branch --merged             # list branches already merged into the current branch
git branch --no-merged          # list branches not yet merged into the current branch
git checkout $id                # check out a historic commit (detached HEAD; the state is dropped when you switch away)
git checkout $id -b <new_branch> # check out a historic commit as a new branch
git branch -d <branch>          # delete a branch
git branch -D <branch>          # force-delete a branch (required when it is unmerged)
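The branch lifecycle above (create, switch, merge, check merged, delete) in one throwaway run (assumes git is installed; the branch name is made up):

```shell
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'root'
main=$(git symbolic-ref --short HEAD)   # default branch name varies (master/main)
git checkout -q -b feature              # create and switch in one step
git commit -q --allow-empty -m 'work on feature'
git checkout -q "$main"
git merge -q feature                    # fast-forward merge
merged=$(git branch --merged)           # feature now shows up as merged
git branch -d feature > /dev/null       # plain -d succeeds once merged
echo "$merged"
```

Deleting before the merge would have required `-D`, which is exactly the safety check the last two commands in the list describe.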

8. Remote repositories

git remote add origin <remote>  # add a remote repository
git remote -v                   # show remote URLs and names
git remote show origin          # show the remote repository's status
git remote rm <repository>      # remove a remote
git push -u origin master       # first push from the client
git push -u origin develop      # first push of the local develop branch, setting it to track the remote branch

9. Tracking remote and local branches

git branch --set-upstream master origin/master
git branch --set-upstream develop origin/develop
# note: --set-upstream is deprecated; newer Git uses: git branch --set-upstream-to=origin/master master

10. Multiple Git services on one machine

cd /Users/chenyuan/.ssh
lrwxr-xr-x 1 chenyuan staff 42B May 23 2018 config -> /Users/chenyuan/Dropbox/Mackup/.ssh/config # this config maps multiple git hosts

➜ .ssh cat config

Host git.coding.net # host
HostName git.coding.net
Port 22
User chengcheng222e
IdentityFile ~/.ssh/chengcheng222e_coding # path to the private key for coding.net

Host github.com # host
HostName github.com
Port 22
User chenyuan
IdentityFile ~/.ssh/github_rsa # path to the private key for github.com

11. A fun Git game: Githug

How to install and play Githug; there are 55 levels in total.

How to install

sudo gem install githug
Level list
Level Name What you learn Git command
Level 1 init Initialize a repository git init
Level 2 config Set your user name and email address git config
Level 3 add Add files to the staging area git add
Level 4 commit Make a commit git commit
Level 5 clone Clone a remote repository git clone
Level 6 clone_to_folder Clone a remote repository into a named local directory git clone
Level 7 ignore Configure files that Git should ignore vim .gitignore
Level 8 include Configure files that Git should ignore vim .gitignore
Level 9 status Check the repository status git status
Level 10 number_of_files_committed Check the repository status git status
Level 11 rm Remove a file git rm
Level 12 rm_cached Remove a file from the staging area (the inverse of git add) git rm --cached
Level 13 stash Save work without committing git stash
Level 14 rename Rename a file git mv
Level 15 restructure Reorganize the directory structure
Level 16 log Inspect the log git log
Level 17 tag Create a tag git tag
Level 18 push_tags Push tags to the remote repository git push --tags
Level 19 commit_amend Amend the last commit git commit --amend
Level 20 commit_in_future Commit with a specified date git commit --date
Level 21 reset Remove a file from the staging area (the inverse of git add) git reset
Level 22 reset_soft Undo a commit (the inverse of git commit) git reset --soft
Level 23 checkout_file Discard changes to a file git checkout
Level 24 remote Query remote repositories git remote
Level 25 remote_url Query a remote repository's URL git remote -v
Level 26 pull Pull updates from a remote repository git pull
Level 27 remote_add Add a remote repository git remote
Level 28 push Push commits to a remote repository git push
Level 29 diff View the details of modified files git diff
Level 30 blame Find out who edited each line git blame
Level 31 branch Create a branch git branch
Level 32 checkout Switch branches git checkout
Level 33 checkout_tag Check out a tag git checkout
Level 34 checkout_tag_over_branch Check out a tag git checkout
Level 35 branch_at Create a branch at a given commit git branch
Level 36 delete_branch Delete a branch git branch -d
Level 37 push_branch Push a branch to the remote repository git push
Level 38 merge Merge branches git merge
Level 39 fetch Fetch data from a remote repository git fetch
Level 40 rebase Merge by rebasing git rebase
Level 41 repack Repack the repository git repack
Level 42 cherry-pick Apply a specific commit from another branch git cherry-pick
Level 43 grep Search text git grep
Level 44 rename_commit Reword a historic commit message git rebase -i
Level 45 squash Squash several commits into one git rebase -i
Level 46 merge_squash Squash commits while merging a branch git merge --squash
Level 47 reorder Reorder commits git rebase -i
Level 48 bisect Locate a bug by binary search git bisect
Level 49 stage_lines Stage only some lines of a file git add --edit
Level 50 file_old_branch View the operation history git reflog
Level 51 revert Revert a commit already pushed to the remote git revert
Level 52 restore Recover a deleted commit git reset --hard
Level 53 conflict Resolve a conflict
Level 54 submodule Use a third-party library as a submodule git submodule
Level 55 contribute Contribute

Clear all 55 levels and your Git skills will go up several notches.

References:

How to Read the Spring Boot Source Code

1. Quickly generate a simple Spring Boot project

Go to https://start.spring.io/ and click the generate button to create the project.

http://static.cyblogs.com/QQ20190612-155510@2x.jpg

2. The @SpringBootApplication annotation

A web project needs only this one annotation. Is it really that powerful? Let's see what it actually does.

@SpringBootApplication
public class SpringBootDemoApplication {

public static void main(String[] args) {
SpringApplication.run(SpringBootDemoApplication.class, args);
}

}

The @SpringBootApplication annotation is equivalent to using @Configuration, @EnableAutoConfiguration, and @ComponentScan with their default attributes

@SpringBootConfiguration
@EnableAutoConfiguration
@ComponentScan(excludeFilters = {
@Filter(type = FilterType.CUSTOM, classes = TypeExcludeFilter.class),
@Filter(type = FilterType.CUSTOM, classes = AutoConfigurationExcludeFilter.class) })
public @interface SpringBootApplication {

Looking at the code, the annotation is clearly @SpringBootConfiguration, so why do the docs say @Configuration?

2.1 The @SpringBootConfiguration annotation
@Configuration
public @interface SpringBootConfiguration {
}

Drilling in one level further, it turns out to be the @Component annotation. Looks familiar, doesn't it?

@Component
public @interface Configuration {

Spring provides further stereotype annotations: @Component, @Service, and @Controller. @Component is a generic stereotype for any Spring-managed component. @Repository, @Service, and @Controller are specializations of @Component for more specific use cases (in the persistence, service, and presentation layers, respectively)。

2.2 The @ComponentScan annotation
org.springframework.boot.SpringApplication
// Step 1
public ConfigurableApplicationContext run(String... args) {
refreshContext(context);
}
// Step 2
public ConfigurableApplicationContext run(String... args) {
ConfigurableApplicationContext context = null;
Collection<SpringBootExceptionReporter> exceptionReporters = new ArrayList<>();
configureHeadlessProperty();
SpringApplicationRunListeners listeners = getRunListeners(args);
listeners.starting();
try {
ConfigurableEnvironment environment = prepareEnvironment(listeners,
applicationArguments);
configureIgnoreBeanInfo(environment);
Banner printedBanner = printBanner(environment);
context = createApplicationContext();
exceptionReporters = getSpringFactoriesInstances(
SpringBootExceptionReporter.class,
new Class[] { ConfigurableApplicationContext.class }, context);
prepareContext(context, environment, listeners, applicationArguments,
printedBanner);
refreshContext(context);
afterRefresh(context, applicationArguments);
listeners.started(context);
// Called after the context has been refreshed.
callRunners(context, applicationArguments);
}
listeners.running(context);

return context;
}
// Step 3
protected void refresh(ApplicationContext applicationContext) {
Assert.isInstanceOf(AbstractApplicationContext.class, applicationContext);
((AbstractApplicationContext) applicationContext).refresh();
}
// Step 4
org.springframework.context.support.AbstractApplicationContext
public void refresh() throws BeansException, IllegalStateException {
// Invoke factory processors registered as beans in the context.
invokeBeanFactoryPostProcessors(beanFactory);
}
// Step 5
org.springframework.context.support.PostProcessorRegistrationDelegate
for (BeanFactoryPostProcessor postProcessor : beanFactoryPostProcessors) {
if (postProcessor instanceof BeanDefinitionRegistryPostProcessor) {
registryProcessor.postProcessBeanDefinitionRegistry(registry);
}
}
// Step 6
org.springframework.context.annotation.ConfigurationClassPostProcessor
public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) {
processConfigBeanDefinitions(registry);
}
// Step 7
public void processConfigBeanDefinitions(BeanDefinitionRegistry registry) {
do {
parser.parse(candidates);
}
}
// Step 8
org.springframework.context.annotation.ConfigurationClassParser
protected void processConfigurationClass(ConfigurationClass configClass) throws IOException {
do {
sourceClass = doProcessConfigurationClass(configClass, sourceClass);
}
}
// Step 9
protected final SourceClass doProcessConfigurationClass(ConfigurationClass configClass, SourceClass sourceClass)
throws IOException {
// Process any @PropertySource annotations
for (AnnotationAttributes propertySource : AnnotationConfigUtils.attributesForRepeatable(
sourceClass.getMetadata(), PropertySources.class,
org.springframework.context.annotation.PropertySource.class)) {
if (this.environment instanceof ConfigurableEnvironment) {
processPropertySource(propertySource);
}
else {
logger.info("Ignoring @PropertySource annotation on [" + sourceClass.getMetadata().getClassName() +
"]. Reason: Environment must implement ConfigurableEnvironment");
}
}
// Process any @ComponentScan annotations
Set<AnnotationAttributes> componentScans = AnnotationConfigUtils.attributesForRepeatable(
sourceClass.getMetadata(), ComponentScans.class, ComponentScan.class);
if (!componentScans.isEmpty() &&
!this.conditionEvaluator.shouldSkip(sourceClass.getMetadata(), ConfigurationPhase.REGISTER_BEAN)) {
for (AnnotationAttributes componentScan : componentScans) {
// The config class is annotated with @ComponentScan -> perform the scan immediately
Set<BeanDefinitionHolder> scannedBeanDefinitions =
this.componentScanParser.parse(componentScan, sourceClass.getMetadata().getClassName());
// Check the set of scanned definitions for any further config classes and parse recursively if needed
for (BeanDefinitionHolder holder : scannedBeanDefinitions) {
BeanDefinition bdCand = holder.getBeanDefinition().getOriginatingBeanDefinition();
if (bdCand == null) {
bdCand = holder.getBeanDefinition();
}
if (ConfigurationClassUtils.checkConfigurationClassCandidate(bdCand, this.metadataReaderFactory)) {
parse(bdCand.getBeanClassName(), holder.getBeanName());
}
}
}
}
// Process any @Import annotations
processImports(configClass, sourceClass, getImports(sourceClass), true);
// Process any @ImportResource annotations
AnnotationAttributes importResource =
AnnotationConfigUtils.attributesFor(sourceClass.getMetadata(), ImportResource.class);
if (importResource != null) {
String[] resources = importResource.getStringArray("locations");
Class<? extends BeanDefinitionReader> readerClass = importResource.getClass("reader");
for (String resource : resources) {
String resolvedResource = this.environment.resolveRequiredPlaceholders(resource);
configClass.addImportedResource(resolvedResource, readerClass);
}
}
}
// Step 10
private void processImports(ConfigurationClass configClass, SourceClass currentSourceClass,
Collection<SourceClass> importCandidates, boolean checkForCircularImports) {
String[] importClassNames = selector.selectImports(currentSourceClass.getMetadata());
Collection<SourceClass> importSourceClasses = asSourceClasses(importClassNames);
processImports(configClass, currentSourceClass, importSourceClasses, false);
}

// Step 11
org.springframework.boot.autoconfigure.AutoConfigurationImportSelector
@Override
public String[] selectImports(AnnotationMetadata annotationMetadata) {
AutoConfigurationEntry autoConfigurationEntry = getAutoConfigurationEntry(
autoConfigurationMetadata, annotationMetadata);
}
// Step 12
private List<String> filter(List<String> configurations,
AutoConfigurationMetadata autoConfigurationMetadata) {
for (AutoConfigurationImportFilter filter : getAutoConfigurationImportFilters()) {
invokeAwareMethods(filter);
boolean[] match = filter.match(candidates, autoConfigurationMetadata);
for (int i = 0; i < match.length; i++) {
if (!match[i]) {
skip[i] = true;
candidates[i] = null;
skipped = true;
}
}
}
return new ArrayList<>(result);
}
// Step 13
protected List<AutoConfigurationImportFilter> getAutoConfigurationImportFilters() {
return SpringFactoriesLoader.loadFactories(AutoConfigurationImportFilter.class,
this.beanClassLoader);
}
// Step 14
org.springframework.core.io.support.SpringFactoriesLoader
public static <T> List<T> loadFactories(Class<T> factoryClass, @Nullable ClassLoader classLoader) {
List<String> factoryNames = loadFactoryNames(factoryClass, classLoaderToUse);
return result;
}
// Step 15
private static Map<String, List<String>> loadSpringFactories(@Nullable ClassLoader classLoader) {
try {
Enumeration<URL> urls = (classLoader != null ?
classLoader.getResources(FACTORIES_RESOURCE_LOCATION) :
ClassLoader.getSystemResources(FACTORIES_RESOURCE_LOCATION));
return result;
}
}
public static final String FACTORIES_RESOURCE_LOCATION = "META-INF/spring.factories";

In other words, loading spring.factories is actually functionality provided by spring-core.

Take a look at the spring.factories file inside the spring-boot-autoconfigure project:

# Initializers
org.springframework.context.ApplicationContextInitializer=\
org.springframework.boot.autoconfigure.SharedMetadataReaderFactoryContextInitializer,\
org.springframework.boot.autoconfigure.logging.ConditionEvaluationReportLoggingListener

# Application Listeners
org.springframework.context.ApplicationListener=\
org.springframework.boot.autoconfigure.BackgroundPreinitializer

# Auto Configuration Import Listeners
org.springframework.boot.autoconfigure.AutoConfigurationImportListener=\
org.springframework.boot.autoconfigure.condition.ConditionEvaluationReportAutoConfigurationImportListener

# Auto Configuration Import Filters
org.springframework.boot.autoconfigure.AutoConfigurationImportFilter=\
org.springframework.boot.autoconfigure.condition.OnBeanCondition,\
org.springframework.boot.autoconfigure.condition.OnClassCondition,\
org.springframework.boot.autoconfigure.condition.OnWebApplicationCondition

# Auto Configure
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.boot.autoconfigure.admin.SpringApplicationAdminJmxAutoConfiguration,\
org.springframework.boot.autoconfigure.aop.AopAutoConfiguration,\
org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration,\
org.springframework.boot.autoconfigure.batch.BatchAutoConfiguration,\
org.springframework.boot.autoconfigure.cache.CacheAutoConfiguration,\
org.springframework.boot.autoconfigure.cassandra.CassandraAutoConfiguration,\
org.springframework.boot.autoconfigure.cloud.CloudServiceConnectorsAutoConfiguration,\
org.springframework.boot.autoconfigure.context.ConfigurationPropertiesAutoConfiguration,\
org.springframework.boot.autoconfigure.context.MessageSourceAutoConfiguration,\
org.springframework.boot.autoconfigure.context.PropertyPlaceholderAutoConfiguration,\
org.springframework.boot.autoconfigure.couchbase.CouchbaseAutoConfiguration,\
org.springframework.boot.autoconfigure.dao.PersistenceExceptionTranslationAutoConfiguration,\
org.springframework.boot.autoconfigure.data.cassandra.CassandraDataAutoConfiguration,\
org.springframework.boot.autoconfigure.data.cassandra.CassandraReactiveDataAutoConfiguration,\
org.springframework.boot.autoconfigure.data.cassandra.CassandraReactiveRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.cassandra.CassandraRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.couchbase.CouchbaseDataAutoConfiguration,\
org.springframework.boot.autoconfigure.data.couchbase.CouchbaseReactiveDataAutoConfiguration,\
org.springframework.boot.autoconfigure.data.couchbase.CouchbaseReactiveRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.couchbase.CouchbaseRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.elasticsearch.ElasticsearchAutoConfiguration,\
org.springframework.boot.autoconfigure.data.elasticsearch.ElasticsearchDataAutoConfiguration,\
org.springframework.boot.autoconfigure.data.elasticsearch.ElasticsearchRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.jdbc.JdbcRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.jpa.JpaRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.ldap.LdapRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.mongo.MongoDataAutoConfiguration,\
org.springframework.boot.autoconfigure.data.mongo.MongoReactiveDataAutoConfiguration,\
org.springframework.boot.autoconfigure.data.mongo.MongoReactiveRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.mongo.MongoRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.neo4j.Neo4jDataAutoConfiguration,\
org.springframework.boot.autoconfigure.data.neo4j.Neo4jRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.solr.SolrRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.redis.RedisAutoConfiguration,\
org.springframework.boot.autoconfigure.data.redis.RedisReactiveAutoConfiguration,\
org.springframework.boot.autoconfigure.data.redis.RedisRepositoriesAutoConfiguration,\
org.springframework.boot.autoconfigure.data.rest.RepositoryRestMvcAutoConfiguration,\
org.springframework.boot.autoconfigure.data.web.SpringDataWebAutoConfiguration,\
org.springframework.boot.autoconfigure.elasticsearch.jest.JestAutoConfiguration,\
org.springframework.boot.autoconfigure.elasticsearch.rest.RestClientAutoConfiguration,\
org.springframework.boot.autoconfigure.flyway.FlywayAutoConfiguration,\
org.springframework.boot.autoconfigure.freemarker.FreeMarkerAutoConfiguration,\
org.springframework.boot.autoconfigure.gson.GsonAutoConfiguration,\
org.springframework.boot.autoconfigure.h2.H2ConsoleAutoConfiguration,\
org.springframework.boot.autoconfigure.hateoas.HypermediaAutoConfiguration,\
org.springframework.boot.autoconfigure.hazelcast.HazelcastAutoConfiguration,\
org.springframework.boot.autoconfigure.hazelcast.HazelcastJpaDependencyAutoConfiguration,\
org.springframework.boot.autoconfigure.http.HttpMessageConvertersAutoConfiguration,\
org.springframework.boot.autoconfigure.http.codec.CodecsAutoConfiguration,\
org.springframework.boot.autoconfigure.influx.InfluxDbAutoConfiguration,\
org.springframework.boot.autoconfigure.info.ProjectInfoAutoConfiguration,\
org.springframework.boot.autoconfigure.integration.IntegrationAutoConfiguration,\
org.springframework.boot.autoconfigure.jackson.JacksonAutoConfiguration,\
org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration,\
org.springframework.boot.autoconfigure.jdbc.JdbcTemplateAutoConfiguration,\
org.springframework.boot.autoconfigure.jdbc.JndiDataSourceAutoConfiguration,\
org.springframework.boot.autoconfigure.jdbc.XADataSourceAutoConfiguration,\
org.springframework.boot.autoconfigure.jdbc.DataSourceTransactionManagerAutoConfiguration,\
org.springframework.boot.autoconfigure.jms.JmsAutoConfiguration,\
org.springframework.boot.autoconfigure.jmx.JmxAutoConfiguration,\
org.springframework.boot.autoconfigure.jms.JndiConnectionFactoryAutoConfiguration,\
org.springframework.boot.autoconfigure.jms.activemq.ActiveMQAutoConfiguration,\
org.springframework.boot.autoconfigure.jms.artemis.ArtemisAutoConfiguration,\
org.springframework.boot.autoconfigure.groovy.template.GroovyTemplateAutoConfiguration,\
org.springframework.boot.autoconfigure.jersey.JerseyAutoConfiguration,\
org.springframework.boot.autoconfigure.jooq.JooqAutoConfiguration,\
org.springframework.boot.autoconfigure.jsonb.JsonbAutoConfiguration,\
org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration,\
org.springframework.boot.autoconfigure.ldap.embedded.EmbeddedLdapAutoConfiguration,\
org.springframework.boot.autoconfigure.ldap.LdapAutoConfiguration,\
org.springframework.boot.autoconfigure.liquibase.LiquibaseAutoConfiguration,\
org.springframework.boot.autoconfigure.mail.MailSenderAutoConfiguration,\
org.springframework.boot.autoconfigure.mail.MailSenderValidatorAutoConfiguration,\
org.springframework.boot.autoconfigure.mongo.embedded.EmbeddedMongoAutoConfiguration,\
org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration,\
org.springframework.boot.autoconfigure.mongo.MongoReactiveAutoConfiguration,\
org.springframework.boot.autoconfigure.mustache.MustacheAutoConfiguration,\
org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration,\
org.springframework.boot.autoconfigure.quartz.QuartzAutoConfiguration,\
org.springframework.boot.autoconfigure.reactor.core.ReactorCoreAutoConfiguration,\
org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration,\
org.springframework.boot.autoconfigure.security.servlet.SecurityRequestMatcherProviderAutoConfiguration,\
org.springframework.boot.autoconfigure.security.servlet.UserDetailsServiceAutoConfiguration,\
org.springframework.boot.autoconfigure.security.servlet.SecurityFilterAutoConfiguration,\
org.springframework.boot.autoconfigure.security.reactive.ReactiveSecurityAutoConfiguration,\
org.springframework.boot.autoconfigure.security.reactive.ReactiveUserDetailsServiceAutoConfiguration,\
org.springframework.boot.autoconfigure.sendgrid.SendGridAutoConfiguration,\
org.springframework.boot.autoconfigure.session.SessionAutoConfiguration,\
org.springframework.boot.autoconfigure.security.oauth2.client.servlet.OAuth2ClientAutoConfiguration,\
org.springframework.boot.autoconfigure.security.oauth2.client.reactive.ReactiveOAuth2ClientAutoConfiguration,\
org.springframework.boot.autoconfigure.security.oauth2.resource.servlet.OAuth2ResourceServerAutoConfiguration,\
org.springframework.boot.autoconfigure.security.oauth2.resource.reactive.ReactiveOAuth2ResourceServerAutoConfiguration,\
org.springframework.boot.autoconfigure.solr.SolrAutoConfiguration,\
org.springframework.boot.autoconfigure.task.TaskExecutionAutoConfiguration,\
org.springframework.boot.autoconfigure.task.TaskSchedulingAutoConfiguration,\
org.springframework.boot.autoconfigure.thymeleaf.ThymeleafAutoConfiguration,\
org.springframework.boot.autoconfigure.transaction.TransactionAutoConfiguration,\
org.springframework.boot.autoconfigure.transaction.jta.JtaAutoConfiguration,\
org.springframework.boot.autoconfigure.validation.ValidationAutoConfiguration,\
org.springframework.boot.autoconfigure.web.client.RestTemplateAutoConfiguration,\
org.springframework.boot.autoconfigure.web.embedded.EmbeddedWebServerFactoryCustomizerAutoConfiguration,\
org.springframework.boot.autoconfigure.web.reactive.HttpHandlerAutoConfiguration,\
org.springframework.boot.autoconfigure.web.reactive.ReactiveWebServerFactoryAutoConfiguration,\
org.springframework.boot.autoconfigure.web.reactive.WebFluxAutoConfiguration,\
org.springframework.boot.autoconfigure.web.reactive.error.ErrorWebFluxAutoConfiguration,\
org.springframework.boot.autoconfigure.web.reactive.function.client.ClientHttpConnectorAutoConfiguration,\
org.springframework.boot.autoconfigure.web.reactive.function.client.WebClientAutoConfiguration,\
org.springframework.boot.autoconfigure.web.servlet.DispatcherServletAutoConfiguration,\
org.springframework.boot.autoconfigure.web.servlet.ServletWebServerFactoryAutoConfiguration,\
org.springframework.boot.autoconfigure.web.servlet.error.ErrorMvcAutoConfiguration,\
org.springframework.boot.autoconfigure.web.servlet.HttpEncodingAutoConfiguration,\
org.springframework.boot.autoconfigure.web.servlet.MultipartAutoConfiguration,\
org.springframework.boot.autoconfigure.web.servlet.WebMvcAutoConfiguration,\
org.springframework.boot.autoconfigure.websocket.reactive.WebSocketReactiveAutoConfiguration,\
org.springframework.boot.autoconfigure.websocket.servlet.WebSocketServletAutoConfiguration,\
org.springframework.boot.autoconfigure.websocket.servlet.WebSocketMessagingAutoConfiguration,\
org.springframework.boot.autoconfigure.webservices.WebServicesAutoConfiguration,\
org.springframework.boot.autoconfigure.webservices.client.WebServiceTemplateAutoConfiguration

# Failure analyzers
org.springframework.boot.diagnostics.FailureAnalyzer=\
org.springframework.boot.autoconfigure.diagnostics.analyzer.NoSuchBeanDefinitionFailureAnalyzer,\
org.springframework.boot.autoconfigure.jdbc.DataSourceBeanCreationFailureAnalyzer,\
org.springframework.boot.autoconfigure.jdbc.HikariDriverConfigurationFailureAnalyzer,\
org.springframework.boot.autoconfigure.session.NonUniqueSessionRepositoryFailureAnalyzer

# Template availability providers
org.springframework.boot.autoconfigure.template.TemplateAvailabilityProvider=\
org.springframework.boot.autoconfigure.freemarker.FreeMarkerTemplateAvailabilityProvider,\
org.springframework.boot.autoconfigure.mustache.MustacheTemplateAvailabilityProvider,\
org.springframework.boot.autoconfigure.groovy.template.GroovyTemplateAvailabilityProvider,\
org.springframework.boot.autoconfigure.thymeleaf.ThymeleafTemplateAvailabilityProvider,\
org.springframework.boot.autoconfigure.web.servlet.JspTemplateAvailabilityProvider

Seeing this should clear things up a bit: we wrote nothing ourselves, yet the application already has all these capabilities, because these services are registered for us at startup.
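spring.factories is just a java.util.Properties file: each key is an interface name, and the value is a comma-separated class list with backslashes continuing lines. The lookup SpringFactoriesLoader performs can be sketched in a few lines of shell (the file content below is a toy, not the real list):

```shell
cd "$(mktemp -d)"
# Toy spring.factories with one key and a backslash-continued value list
cat > spring.factories <<'EOF'
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.example.FooAutoConfiguration,\
com.example.BarAutoConfiguration
EOF
# join the continuation lines, drop the key, split the value on commas
factories=$(tr -d '\\\n' < spring.factories | sed 's/^[^=]*=//' | tr ',' '\n')
echo "$factories"
```

Every jar on the classpath may contribute such a file; Boot merges them all, which is how the 100+ auto-configuration entries above get registered without any code of yours.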

3. Where is the container started?
3.1 Why does Tomcat start by default?

Back to the beginning: why do we see Tomcat logs at startup?

Connected to the target VM, address: '127.0.0.1:56945', transport: 'socket'

. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot ::

2019-06-13 09:15:11.818 INFO 16978 --- [ main] c.e.s.SpringBootDemoApplication : Starting SpringBootDemoApplication on bogon with PID 16978 (/Users/chenyuan/Dropbox/Workspaces/IdeaProjects/spring-boot-demo/target/classes started by chenyuan in /Users/chenyuan/Dropbox/Workspaces/IdeaProjects/spring-src-leaning)
2019-06-13 09:15:11.823 INFO 16978 --- [ main] c.e.s.SpringBootDemoApplication : No active profile set, falling back to default profiles: default

// The log shows that an embedded Tomcat has started on port 8080
2019-06-13 09:15:13.597 INFO 16978 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2019-06-13 09:15:13.644 INFO 16978 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2019-06-13 09:15:13.645 INFO 16978 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.14]
2019-06-13 09:15:13.653 INFO 16978 --- [ main] o.a.catalina.core.AprLifecycleListener : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/Users/chenyuan/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.]
2019-06-13 09:15:13.752 INFO 16978 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2019-06-13 09:15:13.752 INFO 16978 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1862 ms
2019-06-13 09:15:14.018 INFO 16978 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-06-13 09:15:14.226 INFO 16978 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2019-06-13 09:15:14.231 INFO 16978 --- [ main] c.e.s.SpringBootDemoApplication : Started SpringBootDemoApplication in 3.007 seconds (JVM running for 3.924)
# Auto Configure
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.boot.autoconfigure.web.servlet.ServletWebServerFactoryAutoConfiguration
@Bean
@ConditionalOnClass(name = "org.apache.catalina.startup.Tomcat")
public TomcatServletWebServerFactoryCustomizer tomcatServletWebServerFactoryCustomizer(
ServerProperties serverProperties) {
return new TomcatServletWebServerFactoryCustomizer(serverProperties);
}

http://static.cyblogs.com/QQ20190614-085006@2x.jpg

<!-- spring-boot-demo/pom.xml -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

<!-- spring-boot-starter-web/pom.xml -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>

<!-- spring-boot-starters/spring-boot-starter-tomcat/pom.xml -->
<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-core</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-annotations-api</artifactId>
        </exclusion>
    </exclusions>
</dependency>
3.2 What if you want to use a different container?
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-tomcat</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-jetty</artifactId>
    </dependency>
</dependencies>

Simply exclude the Tomcat starter from the project and pull in the Jetty container instead.

After this swap, why does the new container get picked up automatically?

org.springframework.boot.autoconfigure.web.servlet.ServletWebServerFactoryConfiguration

@Configuration
@ConditionalOnClass({ Servlet.class, Server.class, Loader.class, WebAppContext.class })
@ConditionalOnMissingBean(value = ServletWebServerFactory.class, search = SearchStrategy.CURRENT)
public static class EmbeddedJetty {

    @Bean
    public JettyServletWebServerFactory JettyServletWebServerFactory() {
        return new JettyServletWebServerFactory();
    }

}

The @ConditionalOnClass and @ConditionalOnMissingClass annotations let @Configuration classes be included based on the presence or absence of specific classes.

Of the classes listed, Server.class, Loader.class, and WebAppContext.class come from Jetty's packages (Servlet.class is from the Servlet API), so this configuration only takes effect once Jetty is on the classpath.
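
The class-presence check behind @ConditionalOnClass can be illustrated in plain Java. The snippet below is a minimal sketch, not Spring's actual implementation (which evaluates conditions through its own metadata machinery): the idea is simply that a configuration "activates" only if every named class can be resolved on the classpath.

```java
public class ConditionalOnClassSketch {

    // True only if every named class can be loaded from the classpath --
    // the same yes/no question @ConditionalOnClass asks.
    static boolean allClassesPresent(String... classNames) {
        for (String name : classNames) {
            try {
                Class.forName(name, false, ConditionalOnClassSketch.class.getClassLoader());
            } catch (ClassNotFoundException ex) {
                return false; // a missing class means the configuration is skipped
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // java.util.ArrayList is always on the classpath; the Jetty Server class
        // is not, unless Jetty has been added as a dependency.
        System.out.println(allClassesPresent("java.util.ArrayList"));
        System.out.println(allClassesPresent("org.eclipse.jetty.server.Server"));
    }
}
```

This is why swapping the starter works: removing spring-boot-starter-tomcat takes Tomcat's classes off the classpath, and adding spring-boot-starter-jetty puts Jetty's on, so the EmbeddedJetty condition starts to pass.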

A factory pattern is also at work here for obtaining a WebServer.

@FunctionalInterface
public interface ServletWebServerFactory {

    /**
     * Gets a new fully configured but paused {@link WebServer} instance. Clients should
     * not be able to connect to the returned server until {@link WebServer#start()} is
     * called (which happens when the {@link ApplicationContext} has been fully
     * refreshed).
     * @param initializers {@link ServletContextInitializer}s that should be applied as
     * the server starts
     * @return a fully configured and started {@link WebServer}
     * @see WebServer#stop()
     */
    WebServer getWebServer(ServletContextInitializer... initializers);

}
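
To see why returning a paused server is useful, here is a toy sketch with invented names (WebServer, WebServerFactory, and FakeServer below are illustrative stand-ins, not Spring's classes): the factory hands back a fully configured server, and the caller decides when to start it.

```java
public class WebServerFactorySketch {

    // Mirrors the shape of a web-server lifecycle: start, stop, query state.
    interface WebServer {
        void start();
        void stop();
        boolean isRunning();
    }

    // Mirrors the factory idea: hand back a configured, but not yet started, server.
    @FunctionalInterface
    interface WebServerFactory {
        WebServer getWebServer(int port);
    }

    // Invented stand-in for Tomcat/Jetty; it only tracks its own state.
    static class FakeServer implements WebServer {
        private final int port;
        private boolean running;

        FakeServer(int port) {
            this.port = port;
        }

        public void start() { running = true; }
        public void stop() { running = false; }
        public boolean isRunning() { return running; }
        public int getPort() { return port; }
    }

    public static void main(String[] args) {
        WebServerFactory factory = FakeServer::new;
        WebServer server = factory.getWebServer(8080);
        // Configured but paused, as the Javadoc above demands.
        System.out.println(server.isRunning());
        server.start(); // in Spring Boot this happens once the context has been refreshed
        System.out.println(server.isRunning());
    }
}
```

The design choice is deliberate: by separating "build" from "start", the framework can finish refreshing the whole ApplicationContext before any client connection is accepted.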
3.3 So when does the container actually get initialized?
// Step 1
org.springframework.context.support.AbstractApplicationContext

public void refresh() throws BeansException, IllegalStateException {
    synchronized (this.startupShutdownMonitor) {
        try {
            // Initialize other special beans in specific context subclasses.
            onRefresh();
        }
        // ... (remainder of refresh() omitted)
    }
}

// Step 2
org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext

protected void onRefresh() {
    super.onRefresh();
    try {
        createWebServer();
    }
    catch (Throwable ex) {
        throw new ApplicationContextException("Unable to start web server", ex);
    }
}

// Step 3
private void createWebServer() {
    WebServer webServer = this.webServer;
    ServletContext servletContext = getServletContext();
    if (webServer == null && servletContext == null) {
        ServletWebServerFactory factory = getWebServerFactory();
        this.webServer = factory.getWebServer(getSelfInitializer());
    }
    else if (servletContext != null) {
        try {
            getSelfInitializer().onStartup(servletContext);
        }
        catch (ServletException ex) {
            throw new ApplicationContextException("Cannot initialize servlet context", ex);
        }
    }
    initPropertySources();
}

Once you see factory.getWebServer, doesn't the whole chain tie together?
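
The three steps above can be compressed into a runnable sketch of the template-method pattern at work. The class names below mirror Spring's, but the bodies are illustrative stubs: refresh() in the base class calls the onRefresh() hook, and the servlet-specific subclass creates the web server there.

```java
public class RefreshFlowSketch {

    // Stand-in for Spring's AbstractApplicationContext: refresh() is the
    // template method, onRefresh() the hook for subclasses.
    abstract static class AbstractApplicationContext {
        final StringBuilder trace = new StringBuilder();

        public void refresh() {
            trace.append("refresh->");
            onRefresh(); // subclass hook, as in the real refresh()
        }

        protected void onRefresh() {
            // default: nothing to do
        }
    }

    // Stand-in for ServletWebServerApplicationContext: overrides the hook
    // and creates the web server during refresh.
    static class ServletWebServerApplicationContext extends AbstractApplicationContext {
        boolean webServerCreated;

        @Override
        protected void onRefresh() {
            trace.append("onRefresh->");
            createWebServer();
        }

        private void createWebServer() {
            trace.append("createWebServer");
            webServerCreated = true; // stands in for factory.getWebServer(...)
        }
    }

    public static void main(String[] args) {
        ServletWebServerApplicationContext ctx = new ServletWebServerApplicationContext();
        ctx.refresh();
        System.out.println(ctx.trace);
        System.out.println(ctx.webServerCreated);
    }
}
```

Running it prints the call order refresh -> onRefresh -> createWebServer, which is exactly the chain traced through the real sources above.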

1. Question

1.1 Choosing between a development role and an algorithm role

Classmates tell me that development roles, compared with algorithm roles, care less about academic credentials and more about technical skill, and that for development you might as well skip the master's degree: three years of learning on the job surely beats three years in graduate school. Algorithms, on the other hand, are rarely touched at the undergraduate level; you need at least a master's, which plays to a graduate student's strengths, but the competition is fiercer, positions are fewer, and the bar is higher.

As for algorithms, I have only watched a few introductory machine-learning videos. As for development, I learned Java, C, and C++ as an undergraduate and have some programming ability, but having never done a project or an internship, I have almost no concept of business logic, frameworks, design patterns, or optimization, and I don't know what day-to-day work at a company involves or what is expected of developers.

Since I haven't studied either field in depth, I can't say I like or dislike either. People tell me to just try learning whichever direction I like, but my feeling is that without understanding something, there is no basis for liking or disliking it.

1.2 How to learn front-end and back-end development

The education I've had, including the gaokao and the graduate entrance exam, has always been textbook-style: start from zero and work up step by step. I applied the same method to learning development and hit walls everywhere. Knowing no front end at all, I started from the basics of HTML and CSS, and found the material simple but endless and scattered; I couldn't keep it up. After that I would still have to learn JS and ES6 syntax, then the React framework, and so on. Learning step by step like this leaves no time for anything else, and the efficiency is extremely low.

Later a classmate told me to change my method: learn on demand, picking up whatever the task requires; make good use of GitHub and the major blogs; study other people's code and source code; then try building projects of my own, learning and summarizing as I go. But this has been full of obstacles too: I can't really follow other people's code on GitHub, and when I try to start a project of my own, I can't think of what to build; my mind goes blank.

I bring up the front end because many classmates doing development seem to cover both ends. They say you need to understand both to be more competitive, to go full-stack. If I lean toward the back end, how much front end do I need to learn?

I also heard that someone from our lab joined Huawei's database group; I have no idea what a dedicated database group does.

1.3 On ACM and LeetCode problems

Grinding these algorithm problems is the only thing I spent real time on as an undergraduate. Are they actually useful for real projects at a company, or do they just exercise logical thinking as one measure of ability? And if they matter for future interviews and written tests, should I start doing a few problems a day now to stay sharp?

1.4 The market for development roles

This is roughly a more detailed version of question 1. At your company, or at internet companies in general, is 996 really the normal schedule? What are the salary levels and prospects? I hear pay is tiered by level. Classmates also repeat the claim that programming is a young person's game: fail to move into management and you are easily laid off. Is development really that hard a life?

In short, I'd like to hear your view on choosing between the two roles, the state of the development job market, and how to learn development.

2. Answer

First, credit where it is due: this is well written, and it shows real thought.

"Thought matters more than action" is something I only worked out in the last few years; I used to believe the opposite and had no patience for castles in the air. We engineers think in terms of action and delivery, but that overlooks a premise: we need good thinking to guide the action. If you build the habit of independent thought and can always examine a problem from different angles, then everything else aside, you are already ahead of the pack.

Back to your questions. I'll answer from my own perspective, but the final judgment is yours.

2.1 On development vs. algorithm roles

Your description really contains two questions: one about the bar for development engineers versus algorithm engineers, and one about whether graduate school or going straight to work pays off better.

First, developers and algorithm engineers do different work. It has to be said that algorithm roles demand more in terms of academic credentials and depth of specialization.

**Algorithm engineer:** leans more toward mathematics, physics, biology, and the like, merely using a programming language to implement the ideas.

**Development engineer:** needs to understand the business, the product, and architecture, plus how to apply frameworks; the question is more often how to use better tools to build something bigger and taller.

2.2 Learning front end and back end

A few years ago we still said the front end had little future and the back end offered more paths, and that was true: the back end is the root beneath the tree. But as the field has developed, the front end has grown ever stronger and more diverse, and even its idioms and syntax now resemble the back end's.

**Back end:** focuses on concurrency, high availability, stability, security, partitioning the business, diagnosing problems, big-data analysis, and so on.

**Front end:** focuses on rendering, interaction, animation, and so on (I'm no front-end specialist, so I can't say much more).

Measured purely by salary, the front end currently has the more obvious edge, though it hits some ceilings later on; nothing too serious.

As for the sheer amount there is to learn: yes, certainly. The back end requires mastering more, and eventually you reach a realization: you can never learn it all, and what matters is understanding the underlying layers, knowing not only the what but the why. The more you know, the more you realize how little you understand.

In today's market it is worth covering both ends, because they have to cooperate. Only by knowing how the other side works can you communicate and design more thoroughly. And if one day you start your own venture and have to do both yourself, you'll take it in stride.

2.3 On grinding problems

For fresh graduates this matters a great deal. Lacking work and project experience, you are judged all the more on principles and theory. Grinding problems is like doing practice exams: the more you do, the more you know. Interviewers value it highly too; at companies like BAT or TMD it is a mandatory gate.

2.4 Career development

On the technical track the early stages look the same for everyone; later it diverges from person to person.

**A typical progression:** junior developer → mid-level developer → senior developer → staff/expert → architect → technical director → CTO.

Around the senior or staff level the path forks, and personality plays a part.

**Pure technology:** some people love digging into technology and algorithms and want to sink down to the lowest layers. That is the pure technical route; the road is very lonely, but a genuine breakthrough brings sudden prominence.

**Management-leaning:** with a technical foundation of your own, you start leading a team, which is mostly about working with people: allocating and breaking down requirements sensibly, and planning the company's overall technical direction, covering technology, business, and staffing.

As for salary, I don't think you need to worry: every role has its market rate, and pay tracks ability, with no necessary relation to age. That said, I find both employment and entrepreneurship genuinely hard. Nothing falls from the sky; in theory an easy job that pays a lot shouldn't exist, unless you were born rich. But if we later do some investing, so that our income doesn't depend entirely on one salary, perhaps we won't be so tired.

On 996 I can't speak for everyone. Overtime is exhausting, though it does bring some growth. We don't force overtime, and things are relatively flexible; some companies pay for overtime or give allowances. Before marriage you may not mind 996 much; once you have a family to balance against work, you probably will.

2.5 Summary

Everything above responds to your specific questions. But there are methods behind all of it, so let me share some lessons from my years of work, in the hope they help. Again: these are just my ramblings; judge for yourself.

1. Wherever you go, you are ultimately there to solve problems, and you cannot avoid dealing with people. Getting to know people is the first thing to do in a new environment; the next is getting people to know you.

2. Gradually get to know yourself; this is hard but important. Most unhappiness and frustration come from not knowing yourself: not finding a goal, liking a task at first and then losing interest, being indecisive because you worry about this and that. In short, act with less blindness and more speed.

3. Life comes in stages; in each stage, do that stage's work well. How long each stage lasts, though, is something you can influence by breaking through your own cognition, perhaps by finding a role model to aim at, or through reading.

4. Professional knowledge must form a system, not scattered fragments that won't connect. Some people go breadth-first, later picking a favorite and going deep; others go depth-first, taking root in one specialty and then expanding outward. Either works, depending on your personality and whether you have found what you love.

5. There really is a great deal to learn, but the underlying fundamentals are not so many. It is hard to dive straight into the bottom layers, though, because there is a threshold. Most people first learn to use things and how a given API behaves, and only later read the source code and compare it with alternatives. Reading source code is great fun, and the more you read, the simpler it feels; the beginning, however, is genuinely painful.

6. Practice is the sole criterion of truth! For most things there is no shortcut: do more. For development, that means writing more code; reading something once and typing it out once are completely different things.

7. Time management: later on, everyone's time is fragmented and interrupted. Keeping enough time for yourself is a skill; it is worth studying some time-management methods.

8. Holding to a few principles makes life simpler: take what fits them, leave what doesn't. For example:

  • Respect everyone's opinions;
  • Keep an open mind; don't reject things out of hand;
  • Build strong business and technical expertise;
  • Practice decomposition and summarization;
  • Communicate well;
  • Keep a measure of creativity.

Each person's life follows its own track; the road you walk is yours to steer. I hope you can forge a brilliant road of your own.

For your reference only.
