{ "version": "https://jsonfeed.org/version/1", "title": "涛声依旧", "subtitle": "天下事有难易乎?为之,则难者亦易矣", "icon": "https://hitoli.com/images/favicon.ico", "description": "天生我材必有用", "home_page_url": "https://hitoli.com", "items": [ { "id": "https://hitoli.com/2025/02/16/%E6%9E%81%E7%A9%BA%E9%97%B4%E9%83%A8%E7%BD%B2Deepseek%EF%BC%88API%E8%B0%83%E7%94%A8%EF%BC%89/", "url": "https://hitoli.com/2025/02/16/%E6%9E%81%E7%A9%BA%E9%97%B4%E9%83%A8%E7%BD%B2Deepseek%EF%BC%88API%E8%B0%83%E7%94%A8%EF%BC%89/", "title": "极空间部署DeepSeek(API调用)", "date_published": "2025-02-16T14:32:00.000Z", "content_html": "

# Preface

DeepSeek has been getting a lot of attention lately, with praise everywhere. Like many people I wanted to host my own private instance, but after looking into it I gave up: models under 32B are not much use, and anything bigger is beyond my hardware. 🙂 In the end I could not resist and settled for calling the API instead. After some research I chose justsong/one-api plus vinlic/deepseek-free-api plus yidadaa/chatgpt-next-web: one-api centrally manages and routes AI services (it could just as well route to a self-hosted DeepSeek 🙂), deepseek-free-api talks to the official DeepSeek endpoint, and chatgpt-next-web provides the web UI for chatting with the model. With that rough picture in place, let's get started!

\n

# Preparation

You first need a DeepSeek account and its token. To get the token, log in, send any message, then press F12 and, under Application - Storage - Local Storage, find the value of userToken.
(screenshot: 2025-02-16-23-54-54.png)

\n
# Installing deepseek-free-api

(screenshot: 2025-02-16-23-52-13.png)

Add a parameter DEEP_SEEK_CHAT_AUTHORIZATION whose value is the userToken value from above.
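
If you prefer the command line to the ZSpace Docker UI, a roughly equivalent docker run is sketched below. The container port 8000 and the image tag are assumptions based on the project's usual defaults, so check the deepseek-free-api README and adjust.

# sketch only: verify image tag and internal port against the project documentation
docker run -d --name deepseek-free-api --restart=always \
  -p 8000:8000 \
  -e DEEP_SEEK_CHAT_AUTHORIZATION="<userToken value>" \
  vinlic/deepseek-free-api:latest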

\n
\n
# Installing one-api

Log in to the web UI (ZSpace IP + one-api's local port; default credentials root/123456). First add a channel, then create a token. The channel tells one-api which upstream API to call; here it calls deepseek-free-api, with the key set to the userToken value. The token is what chatgpt-next-web will use, and it is also where usage is recorded and permissions are controlled.
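
Once the channel and token exist, you can sanity-check the whole chain with a plain OpenAI-style request against one-api. The model name below is an assumption; use whatever model the channel maps to.

# hypothetical smoke test; substitute your own IP, port, token and model
curl http://<ZSpace-IP>:<one-api-port>/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <token copied from one-api>" \
  -d '{"model": "deepseek-chat", "messages": [{"role": "user", "content": "hello"}]}'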

\n
\n

\"c200886ba8c9.jpg\"

\n

\"2025-02-17-00-01-17.png\"

\n
# Installing chatgpt-next-web

Set the OPENAI_API_KEY parameter to the value copied from one-api's token page, and add a BASE_URL parameter pointing at one-api (ZSpace IP + one-api's local port).
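
For reference, a command-line equivalent of those container settings might look like the sketch below; port 3000 is the image's usual web port and is an assumption here.

# sketch only: the key and URL both come from one-api
docker run -d --name chatgpt-next-web --restart=always \
  -p 3000:3000 \
  -e OPENAI_API_KEY="<token copied from one-api>" \
  -e BASE_URL="http://<ZSpace-IP>:<one-api-port>" \
  yidadaa/chatgpt-next-web:latest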

\n
\n

\"2025-02-17-00-04-28.png\"
\n\"2025-02-17-00-03-04.png\"

\n\n
\n

Congratulations, the setup is complete. Open ZSpace IP + chatgpt-next-web's local port and you can start chatting with DeepSeek.

\n
\n

\"2025-02-17-00-06-07.png\"

\n
# nginx proxy manager configuration

When reverse-proxying chatgpt-next-web you need to add the following snippet under Advanced, otherwise it throws errors.

\n
\n

location / {
    proxy_pass http://<ZSpace-IP>:<chatgpt-next-web-port>;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
}

Final result
(screenshot: 0d5ddc77d244fcc4e1a60a1828eed054.png)

\n", "tags": [ "极空间", "Docker", "DeepSeek", "AI" ] }, { "id": "https://hitoli.com/2024/09/29/dante-stunnel-clash-%E7%A7%91%E5%AD%A6%E4%B8%8A%E7%BD%91/", "url": "https://hitoli.com/2024/09/29/dante-stunnel-clash-%E7%A7%91%E5%AD%A6%E4%B8%8A%E7%BD%91/", "title": "dante+stunnel+clash 科学上网", "date_published": "2024-09-29T08:25:00.000Z", "content_html": "

# Preface

Last time I covered getting past the firewall with squid + stunnel. That setup only really works in a browser and cannot put other apps behind the proxy. This time the dante + stunnel + clash combination is used so that other applications can be proxied as well.

\n

# Preparation

A server that can reach the open internet, such as a Hong Kong cloud host running Ubuntu.

\n
# dante
\n\n

apt-get install -y dante-server

\n\n
\n

Adjust the dante configuration:
1. Edit /etc/danted.conf and append the configuration below.
2. If you do not need username/password authentication, change socksmethod to none.
3. Optionally create a dedicated user for authentication:
sudo useradd -r -s /bin/false proxy
sudo passwd proxy
4. Restart the service:
systemctl restart danted
5. Check its status:
systemctl status danted

\n
\n

logoutput: syslog
internal: 0.0.0.0 port = 1080
external: eth0

socksmethod: username
clientmethod: none

user.privileged: root
user.notprivileged: nobody

client pass {
from: 0.0.0.0/0 to: 0.0.0.0/0
}

socks pass {
from: 0.0.0.0/0 to: 0.0.0.0/0
}
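
After restarting danted, a quick way to confirm the SOCKS5 proxy and the username authentication is a curl request through it from any machine that can reach port 1080; the response should show the server's public IP.

curl -x socks5://proxy:<password>@<server-ip>:1080 https://www.ip.cn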

\n
# stunnel server side
\n\n

apt-get install -y stunnel

\n\n

openssl req -new -x509 -days 3650 -nodes -out stunnel.pem -keyout stunnel.pem

\n\n

; 设置stunnel的pid文件路径
pid = /etc/stunnel/stunnel.pid
; 设置stunnel工作的用户(组)
setuid = root
setgid = root

; 开启日志等级:emerg (0), alert (1), crit (2), err (3), warning (4), notice (5), info (6), or debug (7)
debug = 7
; 日志文件路径
output = /etc/stunnel/stunnel.log

; 证书文件
cert = /etc/stunnel/stunnel.pem
; 私钥文件
key = /etc/stunnel/stunnel.pem

; 自定义服务名danted
[danted]
; 服务监听的端口,client要连接这个端口与server通信
accept = 1081
; 服务要连接的端口,连接到danted的1080端口,将数据发给danted
connect = 1080

\n\n
# stunnel client side

The client can be installed on the machine that needs the proxy and started only when required (set the proxy address to 127.0.0.1 plus the client's listening port), or it can run permanently on a server inside China that stays connected (then use that server's IP plus the listening port). In this example the client is an Ubuntu machine.

\n
\n\n

apt-get install -y stunnel

\n\n

[danted]
; listening port; point clients at stunnel-client-ip:1080
accept = 1080
; IP and port of the stunnel server to connect to
connect = <stunnel-server-ip>:1081
client = yes
; verify the certificate presented by the peer
;verify = 2
; file used for certificate verification
;CAfile = /etc/stunnel/stunnel-server.pem
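
On Ubuntu the packaged service is usually called stunnel4 and reads its configs from /etc/stunnel/; the exact file and service names can differ by release, so treat this as a sketch.

# save the config above as /etc/stunnel/danted-client.conf, then:
systemctl enable stunnel4
systemctl restart stunnel4
systemctl status stunnel4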

\n
# Clash

Create a new profile
\n\"\"

\n

Edit the profile and paste in the following configuration
\n\"\"

\n

# Profile Template for clash verge

proxies:
- name: "PROXY"
type: socks5 # node type
server: "" # SOCKS5 server address
port: 1080 # server port
username: "proxy" # optional, username
password: "" # optional, password

rule-providers:
reject:
type: http
behavior: domain
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/reject.txt"
path: ./ruleset/reject.yaml
interval: 86400

icloud:
type: http
behavior: domain
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/icloud.txt"
path: ./ruleset/icloud.yaml
interval: 86400

apple:
type: http
behavior: domain
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/apple.txt"
path: ./ruleset/apple.yaml
interval: 86400

google:
type: http
behavior: domain
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/google.txt"
path: ./ruleset/google.yaml
interval: 86400

proxy:
type: http
behavior: domain
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/proxy.txt"
path: ./ruleset/proxy.yaml
interval: 86400

direct:
type: http
behavior: domain
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/direct.txt"
path: ./ruleset/direct.yaml
interval: 86400

private:
type: http
behavior: domain
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/private.txt"
path: ./ruleset/private.yaml
interval: 86400

gfw:
type: http
behavior: domain
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/gfw.txt"
path: ./ruleset/gfw.yaml
interval: 86400

tld-not-cn:
type: http
behavior: domain
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/tld-not-cn.txt"
path: ./ruleset/tld-not-cn.yaml
interval: 86400

telegramcidr:
type: http
behavior: ipcidr
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/telegramcidr.txt"
path: ./ruleset/telegramcidr.yaml
interval: 86400

cncidr:
type: http
behavior: ipcidr
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/cncidr.txt"
path: ./ruleset/cncidr.yaml
interval: 86400

lancidr:
type: http
behavior: ipcidr
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/lancidr.txt"
path: ./ruleset/lancidr.yaml
interval: 86400

applications:
type: http
behavior: classical
url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/applications.txt"
path: ./ruleset/applications.yaml
interval: 86400

rules:
- RULE-SET,applications,DIRECT
- DOMAIN,clash.razord.top,DIRECT
- DOMAIN,yacd.haishan.me,DIRECT
- RULE-SET,private,DIRECT
- RULE-SET,reject,REJECT
- RULE-SET,icloud,DIRECT
- RULE-SET,apple,DIRECT
- RULE-SET,google,PROXY
- RULE-SET,proxy,PROXY
- RULE-SET,direct,DIRECT
- RULE-SET,lancidr,DIRECT
- RULE-SET,cncidr,DIRECT
- RULE-SET,telegramcidr,PROXY
- GEOIP,LAN,DIRECT
- GEOIP,CN,DIRECT
- MATCH,PROXY

Enable the proxy and auto-start
\n\"\"

\n
\n

At this point the machine running clash can browse the open internet freely, and it can also proxy other machines on the same LAN.

\n
\n", "tags": [ "生活", "技术分享", "stunnel", "科学上网", "dante", "clash", "socket" ] }, { "id": "https://hitoli.com/2024/08/21/%E5%88%A9%E7%94%A8jackson%E5%AF%B9%E8%BF%94%E5%9B%9E%E6%95%B0%E6%8D%AE%E5%81%9A%E5%AD%97%E5%85%B8%E8%BD%AC%E6%8D%A2/", "url": "https://hitoli.com/2024/08/21/%E5%88%A9%E7%94%A8jackson%E5%AF%B9%E8%BF%94%E5%9B%9E%E6%95%B0%E6%8D%AE%E5%81%9A%E5%AD%97%E5%85%B8%E8%BD%AC%E6%8D%A2/", "title": "利用jackson对返回数据做字典转换", "date_published": "2024-08-21T02:00:00.000Z", "content_html": "

# Overview

Projects often store type-like fields as numbers or letters, while the front end needs human-readable text. A colleague wrote a shared annotation that uses Java reflection to rewrite field values or add extra properties for this dictionary translation. Because Java is statically typed and classes are fixed at compile time, properties cannot be added dynamically; his workaround was either to convert the object to a Map first and then add the property, or to pre-declare the dictionary property on the class. In practice I found that converting to a Map (possibly also because of where the interception happens) loses the other annotations on the fields, such as Swagger documentation and date formatting, while pre-declaring dictionary properties adds an extra manual step and is tedious. So I decided to hook into Jackson's serialization phase and do the dictionary translation there instead. A JSON object is a key/value structure much like a Map, so adding or removing properties is not a problem, and by serialization time the other annotations have already been applied, so nothing is lost.

\n
\n

# Implementation details

# Required dependencies
\n

<!-- adjust the versions as needed -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>2.15.2</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.15.2</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>2.15.2</version>
</dependency>

\n
# Defining the annotation
\n

public enum DicHandleStrategy {
replace,
add;

private DicHandleStrategy() {
}
}

import com.fasterxml.jackson.annotation.JacksonAnnotationsInside;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target({ElementType.FIELD, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@JacksonAnnotationsInside
@JsonSerialize(using = JsonDicHandle.class)
public @interface JsonDicField {

/**
* Business table name
* @return
*/
String tableName() default "";

/**
* Business field name
* @return
*/
String fieldName() default "";

/**
* By default a new property named after the original property plus _dic (e.g. type_dic) is added, when newFieldName is empty
* @return
*/
DicHandleStrategy strategy() default DicHandleStrategy.add;

/**
* Name of the dictionary property (if omitted while strategy is add, it defaults to the original property name plus _dic)
* @return
*/
String newFieldName() default "";

}
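
To make the intent concrete, here is a hypothetical DTO using the annotation. With the default add strategy, serializing an object whose type is "1" also emits a type_dic property holding the dictionary label; the class and the table/field names below are made up for illustration.

public class UserVO {

    // assume the dictionary for table "sys_user", field "type" maps 1 -> "admin", 2 -> "normal user"
    @JsonDicField(tableName = "sys_user", fieldName = "type")
    private String type;

    // getters/setters omitted; the serialized JSON would look like {"type":"1","type_dic":"admin"}
}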

\n
# 注解处理
\n

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.NoSuchBeanDefinitionException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

import java.util.Timer;
import java.util.TimerTask;

public class SpringContextUtil implements ApplicationContextAware {
\tprivate static ApplicationContext applicationContext;

\tpublic void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
\t\tSpringContextUtil.applicationContext = applicationContext;
\t}

\tpublic static ApplicationContext getApplicationContext() {
\t\treturn applicationContext;
\t}

\tpublic static Object getBean(String name) throws BeansException {
\t\treturn applicationContext.getBean(name);
\t}

\tpublic static Object getBean(String name, Class<?> requiredType) throws BeansException {
\t\treturn applicationContext.getBean(name, requiredType);
\t}

\tpublic static Object getBean(Class<?> requiredType) throws BeansException {
\t\treturn applicationContext.getBean(requiredType);
\t}

\tpublic static boolean containsBean(String name) {
\t\treturn applicationContext.containsBean(name);
\t}

\tpublic static boolean isSingleton(String name) throws NoSuchBeanDefinitionException {
\t\treturn applicationContext.isSingleton(name);
\t}

\tpublic static Class<?> getType(String name) throws NoSuchBeanDefinitionException {
\t\treturn applicationContext.getType(name);
\t}

\tpublic static String[] getAliases(String name) throws NoSuchBeanDefinitionException {
\t\treturn applicationContext.getAliases(name);
\t}
}


import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.BeanProperty;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.ser.ContextualSerializer;
import com.ys.szygl.util.SpringContextUtil;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.collections.MapUtils;
import org.apache.commons.lang3.StringUtils;

import java.io.IOException;
import java.util.*;
import java.util.stream.Collectors;

public class JsonDicHandle extends JsonSerializer<Object> implements ContextualSerializer {

final String DIC_FIELD_SUFFIX = "_dic";

String tableName = "";

String fieldName = "";

DicHandleStrategy strategy = DicHandleStrategy.add;

String newFieldName = "";

String propertyName = "";

static final IDicService iDicService = (IDicService)SpringContextUtil.getBean(IDicService.class);

public JsonDicHandle() {
}

public JsonDicHandle(String tableName, String fieldName, DicHandleStrategy strategy, String newFieldName, String propertyName) {
this.tableName = tableName;
this.fieldName = fieldName;
this.strategy = strategy;
this.newFieldName = newFieldName;
this.propertyName = propertyName;
}

@Override
public void serialize(Object value, JsonGenerator gen, SerializerProvider serializers) throws IOException {
if (StringUtils.isBlank(this.newFieldName) && StringUtils.isNotBlank(this.propertyName)) {
this.newFieldName = this.propertyName + DIC_FIELD_SUFFIX;
}
String dicValue = null;
if (Objects.nonNull(value)) {
dicValue = this.getDicValue(this.tableName, this.fieldName, String.valueOf(value));
if (DicHandleStrategy.replace.name().equals(this.strategy.name())) {
gen.writeObject(dicValue);
} else {
gen.writeObject(value);
}
} else {
gen.writeObject(value);
}
if (DicHandleStrategy.add.name().equals(this.strategy.name()) && StringUtils.isNotBlank(this.newFieldName)) {
gen.writeStringField(this.newFieldName, dicValue);
}
}

@Override
public JsonSerializer<?> createContextual(SerializerProvider prov, BeanProperty property) throws JsonMappingException {
JsonDicField annotation = property.getAnnotation(JsonDicField.class);
if (Objects.nonNull(annotation)) {
return new JsonDicHandle(annotation.tableName(), annotation.fieldName(), annotation.strategy(),
annotation.newFieldName(), property.getName());
}
return new JsonDicHandle();
}

/**
* 把值转为字典值
* @param value
* @return
*/
private String getDicValue(String tableName, String fieldName, String value) {
String dicValue = null;
if (Objects.nonNull(iDicService)) {
Map<String, String> dicMap = this.iDicService.getDicMapFromCatch(tableName, fieldName);
if (MapUtils.isNotEmpty(dicMap)) {
List<String> dicValues = Arrays.stream(value.split(",")).map(s -> {
return Optional.ofNullable(dicMap.get(s)).map(String::valueOf).orElse("");
}).collect(Collectors.toList());
if (CollectionUtils.isNotEmpty(dicValues)) {
dicValue = String.join(",", dicValues);
}
}
}
return dicValue;
}

}

\n
# Other notes

IDicService is my own dictionary-cache service; it uses tableName and fieldName to locate the dictionary labels for a field. Adapt it to your own business logic.

\n
\n", "tags": [ "工作", "解决问题", "jackson", "字典转换" ] }, { "id": "https://hitoli.com/2024/06/25/idea-%E4%B8%BB%E9%A2%98-%E4%BB%A3%E7%A0%81%E9%A2%9C%E8%89%B2-%E4%BB%A3%E7%A0%81%E5%8C%BA%E8%83%8C%E6%99%AF-%E8%A1%8C%E5%8F%B7%E8%83%8C%E6%99%AF-%E6%B3%A8%E9%87%8A%E9%A2%9C%E8%89%B2%E4%BF%AE%E6%94%B9/", "url": "https://hitoli.com/2024/06/25/idea-%E4%B8%BB%E9%A2%98-%E4%BB%A3%E7%A0%81%E9%A2%9C%E8%89%B2-%E4%BB%A3%E7%A0%81%E5%8C%BA%E8%83%8C%E6%99%AF-%E8%A1%8C%E5%8F%B7%E8%83%8C%E6%99%AF-%E6%B3%A8%E9%87%8A%E9%A2%9C%E8%89%B2%E4%BF%AE%E6%94%B9/", "title": "idea 主题 代码颜色 代码区背景 行号背景 注释颜色修改", "date_published": "2024-06-25T08:03:00.000Z", "content_html": "

# 简介

\n
\n

最近写代码眼睛总是看的不舒服,想着换一个主题,但是换了主题,代码的颜色显示和之前又不一样了,接下来就是修改主题,但是代码颜色仍然保持 Darcula 主题的颜色。

\n
\n

# 修改主题

\n

我这里用的是 One Dark theme 可以直接去下载这个插件
\n\"\"

\n

# 修改代码颜色与背景

\n
    \n
  1. 代码颜色
    \n这里只修改代码颜色的话,idea 整体背景将不统一,看着非常难受,因此不仅要修改代码颜色,还要修改与当前主题相同的背景颜色。
    \n\"\"
    \n这里直接修改为 Darcula ,那么整体的代码颜色风格都会变成 Darcula,但是代码区域的背景颜色也会变成 Darcula ,使得 idea 背景一体性破坏。
  2. \n
  3. 代码区背景
    \n\"\"
    \n这里如果你用的跟我一样的 One Dark theme ,那么你就可以修改颜色为 21252B 这样代码区颜色就会和主题相一致,但是到这里你会发现,行号区域的颜色又不一样了。
  4. \n
\n

# 修改行号背景

\n

\"\"
\n这里按照相同的方法修改,即可达到整体的一致性。

\n

# 修改注释颜色

\n

这里就是个人习惯了,我习惯注释都是绿色的,清晰明了。
\n\"\"
\n这里提供一个参考 629755 ,我个人比较喜欢这个颜色。

\n

# 鼠标悬停代码提示框背景修改

\n

\"\"
\n\"\"

\n

# 代码快捷提示框背景颜色修改

\n

\"\"
\n\"\"

\n", "tags": [ "工作", "IDE", "IDE", "intellIJ" ] }, { "id": "https://hitoli.com/2024/05/22/%E5%AF%B9XML%E6%A0%BC%E5%BC%8F%E7%9A%84Word%E6%A8%A1%E6%9D%BF%E6%A0%BC%E5%BC%8F%E5%8C%96%E5%A4%84%E7%90%86/", "url": "https://hitoli.com/2024/05/22/%E5%AF%B9XML%E6%A0%BC%E5%BC%8F%E7%9A%84Word%E6%A8%A1%E6%9D%BF%E6%A0%BC%E5%BC%8F%E5%8C%96%E5%A4%84%E7%90%86/", "title": "对XML格式的Word模板格式化处理", "date_published": "2024-05-22T07:43:00.000Z", "content_html": "

# Overview

Generating complex Word documents requires a Word template in XML format, but a Word file saved as XML is fairly messy. Below is a utility class that reformats the XML:

\n
\n

package xxx.util;

import cn.hutool.core.io.FileUtil;
import cn.hutool.core.io.file.FileAppender;
import cn.hutool.core.io.file.FileReader;

import java.util.List;

public class WordConvertXmlHandle {

public static void main(String[] args) {
//文件读取-FileReader
//默认UTF-8编码,可以在构造中传入第二个参数作为编码
FileReader fileReader = new FileReader("D:\\\\mb.xml");
//从文件中读取每一行数据
List<String> strings = fileReader.readLines();
//文件追加-FileAppender
//destFile – 目标文件
//capacity – 当行数积累多少条时刷入到文件
//isNewLineMode – 追加内容是否为新行
FileAppender appender = new FileAppender(FileUtil.newFile("D:\\\\mb.ftl"), 16, true);
//遍历得到每一行数据
for (String string : strings) {
//判断每一行数据中不包含'$'的数据先添加进新文件
if (!string.contains("$")) {
appender.append(string);
continue;
}
//如果一行数据中包含'$'变量符将替换为'#$'
string = string.replaceAll("\\\\$", "#\\\\$");
//然后以'#'切割成每一行(数组),这样一来'$'都将在每一行的开头
String[] ss = string.split("#");
// 同一行的数据写到同一行,文件追加自动换行了(最后的完整数据)
StringBuilder sb = new StringBuilder();
//遍历每一行(数组ss)
for (int i = 0; i < ss.length; i++) {
//暂存数据
String s1 = ss[i];
//将不是以'$'开头的行数据放进StringBuilder
if (!s1.startsWith("$")) {
sb.append(s1);
continue;
}
//被分离的数据一般都是'${'这样被分开
//匹配以'$'开头的变量找到'}' 得到索引位置
int i1 = s1.lastIndexOf("}");
//先切割得到这个完整体
String substr = s1.substring(0, i1 + 1);
//把变量追加到StringBuilder
sb.append(substr.replaceAll("<[^>]+>", ""));
//再将标签数据追加到StringBuilder包裹变量
sb.append(s1.substring(i1 + 1));
}
appender.append(sb.toString());
}
appender.flush();
appender.toString();
}
}

\n", "tags": [ "工作", "解决问题", "xml", "word" ] }, { "id": "https://hitoli.com/2024/05/10/MySQL%E8%A1%A8%E5%88%86%E5%8C%BA/", "url": "https://hitoli.com/2024/05/10/MySQL%E8%A1%A8%E5%88%86%E5%8C%BA/", "title": "MySQL表分区", "date_published": "2024-05-10T09:09:00.000Z", "content_html": "

# Overview

When a single table grows too large you have to consider sharding or partitioning it. Both address storage and query efficiency for large volumes of data, but they are implemented differently and solve different problems.

\n
\n
\n

Sharding:

\n
\n\n
\n

Partitioning:

\n
\n\n
\n

Differences:

\n
\n\n

# Implementation details

This article covers how to partition a single table.

# Adding partitions to a table
\n

ALTER TABLE <table_name>
PARTITION BY RANGE COLUMNS (<datetime_column>) (
	-- partition condition: rows with dates before 2022-02-01 go into partition p202201
PARTITION p202201 VALUES LESS THAN ('2022-02-01'),
PARTITION p202202 VALUES LESS THAN ('2022-03-01'),
PARTITION p202203 VALUES LESS THAN ('2022-04-01'),
-- 继续定义更多的分区...
);

\n
# Creating the current month's partition for a given table
\n

CREATE PROCEDURE create_monthly_partition(IN tableName VARCHAR(255))
BEGIN
DECLARE currentYear INT;
DECLARE currentMonth INT;
\tDECLARE nextYear INT;
DECLARE nextMonth INT;
DECLARE partitionName VARCHAR(255);
SET currentYear = YEAR(CURRENT_DATE);
SET currentMonth = MONTH(CURRENT_DATE);
\t-- 计算下一个月的年份和月份
IF currentMonth = 12 THEN
SET nextYear = currentYear + 1;
SET nextMonth = 1;
ELSE
SET nextYear = currentYear;
SET nextMonth = currentMonth + 1;
END IF;
SET partitionName = CONCAT('p', currentYear, LPAD(currentMonth, 2, 0));
SET @sql = CONCAT('ALTER TABLE ', tableName,
' ADD PARTITION (PARTITION ', partitionName,
' VALUES LESS THAN (\\'', nextYear, '-', LPAD(nextMonth, 2, 0), '-01\\'', '))');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END

\n
# Calling the procedure to create a new partition
CALL create_monthly_partition('<table_name>');
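
To confirm the procedure actually added the partition, you can query the standard MySQL metadata tables:

SELECT PARTITION_NAME, PARTITION_DESCRIPTION, TABLE_ROWS
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = '<table_name>';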

\n
# Dropping a partition
ALTER TABLE <table_name> DROP PARTITION <partition_name>;

\n
# A monthly event that adds new partitions to the listed tables
CREATE EVENT create_monthly_partition_event
ON SCHEDULE EVERY 1 MONTH
STARTS '2024-01-01 00:00:00.000'
ON COMPLETION NOT PRESERVE
ENABLE
DO begin
\tDECLARE CONTINUE HANDLER FOR SQLEXCEPTION
BEGIN
-- catch the exception, record the error message, then keep going
SET @error_message = CONCAT('Error occurred while processing table: ', @tableName);
-- the error could be inserted into a log table, or simply SELECTed out
SELECT @error_message;
END;
-- list of tables that need partitioning
SET @tables = '<table1>,<table2>';
-- walk the list and create a partition for each table
WHILE CHAR_LENGTH(@tables) > 0 DO
SET @tableName = SUBSTRING_INDEX(@tables, ',', 1);
SET @tables = SUBSTRING(@tables, CHAR_LENGTH(@tableName) + 2);
CALL create_monthly_partition(@tableName);
END WHILE;
end
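
MySQL events only fire when the event scheduler is on, so it is worth checking and enabling it before relying on the monthly event:

SHOW VARIABLES LIKE 'event_scheduler';
-- enable it on the running instance (also set event_scheduler=ON in my.cnf to survive restarts)
SET GLOBAL event_scheduler = ON;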

\n", "tags": [ "工作", "解决问题", "mysql", "表分区" ] }, { "id": "https://hitoli.com/2024/04/30/Java%E5%90%8E%E7%AB%AF%E7%A6%81%E6%AD%A2%E6%8E%A5%E5%8F%A3%E7%9E%AC%E6%97%B6%E9%87%8D%E5%A4%8D%E8%B0%83%E7%94%A8/", "url": "https://hitoli.com/2024/04/30/Java%E5%90%8E%E7%AB%AF%E7%A6%81%E6%AD%A2%E6%8E%A5%E5%8F%A3%E7%9E%AC%E6%97%B6%E9%87%8D%E5%A4%8D%E8%B0%83%E7%94%A8/", "title": "Java后端禁止接口瞬时重复调用", "date_published": "2024-04-30T03:44:00.000Z", "content_html": "

# Overview

The front end sometimes fires the same request several times for no good reason, wasting back-end resources, so the back end now intercepts such duplicate calls. The scheme uses the requesting user's id plus the interface URL plus the parameters as the key, the request time as the value, and caches them in a ConcurrentHashMap. If an identical request arrives within a configured interval of the previous one, it is treated as a duplicate.

\n

# Implementation details

# A re-readable Request wrapper

The request body can only be read once, so the request is wrapped.
\n

package xxx.support;

import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;

import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.Enumeration;
import java.util.Map;
import java.util.Objects;
import java.util.TreeMap;

@Slf4j
public class RepeatableReadHttpServletRequestWrapper extends HttpServletRequestWrapper {

private final byte[] requestBody;

public RepeatableReadHttpServletRequestWrapper(HttpServletRequest request) throws IOException {
super(request);
this.requestBody = readRequestBody(request);
}

private byte[] readRequestBody(HttpServletRequest request) throws IOException {
try (InputStream inputStream = request.getInputStream();
ByteArrayOutputStream result = new ByteArrayOutputStream()) {

byte[] buffer = new byte[1024];
int length;
while ((length = inputStream.read(buffer)) != -1) {
result.write(buffer, 0, length);
}

return result.toByteArray();
}
}

@Override
public ServletInputStream getInputStream() throws IOException {
// 直接使用 ByteArrayInputStream,它提供可重复读取的输入流
return new ServletInputStream() {
private final ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(requestBody);

@Override
public int read() throws IOException {
return byteArrayInputStream.read();
}

@Override
public boolean isFinished() {
return byteArrayInputStream.available() == 0;
}

@Override
public boolean isReady() {
return true;
}

@Override
public void setReadListener(ReadListener readListener) {
// 不需要实现,可以留空
}
};
}

@Override
public BufferedReader getReader() throws IOException {
// 使用 InputStreamReader 包装 ByteArrayInputStream,提供可重复读取的字符流
return new BufferedReader(new InputStreamReader(new ByteArrayInputStream(requestBody)));
}

/**
* 获取json格式的参数
* @return
*/
public String getParamsToJSONString() {
String jsonStr = "";
if ("POST".equals(this.getMethod().toUpperCase()) && this.isJsonRequest()) {
try {
jsonStr = this.readJsonData();
} catch (Exception e) {
log.error(e.getMessage());
}
} else {
Enumeration<String> parameterNames = this.getParameterNames();
if (Objects.nonNull(parameterNames) && parameterNames.hasMoreElements()) {
// 将参数排序后转为json
Map<String, String> paramsMap = new TreeMap<>();
while (parameterNames.hasMoreElements()) {
String paramName = parameterNames.nextElement();
paramsMap.put(paramName, this.getParameter(paramName));
}
jsonStr = JSON.toJSONString(paramsMap);
}
}
return jsonStr;
}

/**
* 判断是否json请求
* @return
*/
private boolean isJsonRequest() {
String contentType = this.getContentType();
return contentType != null && contentType.toLowerCase().contains("application/json");
}

/**
* 获取json格式的参数
* @return
* @throws IOException
*/
private String readJsonData() throws IOException {
return new String(this.readRequestBody(this), StandardCharsets.UTF_8);
}

}

\n
# The duplicate-request filter
\n

package xxx.filter;

import cn.hutool.core.collection.CollectionUtil;
import xxx.RepeatableReadHttpServletRequestWrapper;
import org.springframework.boot.actuate.endpoint.web.WebEndpointResponse;
import org.springframework.security.web.util.matcher.RequestMatcher;
import org.springframework.web.filter.OncePerRequestFilter;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

public class DuplicateRequestFilter extends OncePerRequestFilter {
   // 是否启用
   private Boolean duplicateRequestFilter;
   // 间隔时间(毫秒)
   private Long intervalTime;
   // 清除缓存时间(毫秒)
private Long clearCachetime;
   // 放行url
   private List<RequestMatcher> permitAll;

public DuplicateRequestFilter(Boolean duplicateRequestFilter, List<RequestMatcher> permitAll, Long intervalTime,
Long clearCachetime) {
this.duplicateRequestFilter = duplicateRequestFilter;
this.permitAll = permitAll;
this.intervalTime = intervalTime;
this.clearCachetime = clearCachetime;
}

// 存储参数和请求时间
private Map<String, Long> requestCache = new ConcurrentHashMap<>();

@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
throws ServletException, IOException {
boolean doFilter = true;
// 使用 ContentCachingRequestWrapper 包装原始请求
RepeatableReadHttpServletRequestWrapper wrappedRequest = null;
if (this.duplicateRequestFilter) {
// 判断请求路径是否需要放行
boolean permit = false;
if (CollectionUtil.isNotEmpty(this.permitAll)) {
for (RequestMatcher matcher: this.permitAll) {
if (matcher.matches(request)) {
permit = true;
break;
}
}
}
if (!permit) {
if (request instanceof RepeatableReadHttpServletRequestWrapper) {
wrappedRequest = (RepeatableReadHttpServletRequestWrapper) request;
} else {
wrappedRequest = new RepeatableReadHttpServletRequestWrapper(request);
}
doFilter = this.isValid(wrappedRequest);
}
}
if (doFilter) {
// 继续处理请求
filterChain.doFilter(Objects.nonNull(wrappedRequest) ? wrappedRequest : request, response);
} else {
response.setContentType("application/json");
response.setStatus(WebEndpointResponse.STATUS_TOO_MANY_REQUESTS);
// response.setStatus(HttpServletResponse.SC_OK);
// ObjectMapper mapper = new ObjectMapper();
// mapper.writeValue(response.getOutputStream(), R.error(WebEndpointResponse.STATUS_TOO_MANY_REQUESTS, "重复的请求"));
}
}

/**
* 验证请求的有效性(判断是否重复请求)
* @param request
* @return
*/
private boolean isValid(RepeatableReadHttpServletRequestWrapper request) {
boolean valid = true;
// 缓存的key
String key = TokenUtil.getUidByToken() + "_" + request.getServletPath() + "_" + request.getParamsToJSONString();
// 获取之前的请求时间
Long previousRequestTime = requestCache.get(key);
if (previousRequestTime != null) {
// 如果距离上次请求时间很短(例如1秒),则拒绝当前请求
if (System.currentTimeMillis() - previousRequestTime < this.intervalTime) {
valid = false;
}
}
this.clearOldRequests();
// 缓存当前请求时间
requestCache.put(key, System.currentTimeMillis());
return valid;
}

// 用于清除缓存中的旧请求数据,防止缓存无限增长
private void clearOldRequests() {
requestCache.entrySet().removeIf(entry -> System.currentTimeMillis() - entry.getValue() > this.clearCachetime);
}

}

\n
# Configuring the OAuth2 resource server
\n

package xxx.config;


import xxx.AuthExceptionEntryPoint;
import xxx.CustomAccessDeniedHandler;
import xxx.DuplicateRequestFilter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableResourceServer;
import org.springframework.security.oauth2.config.annotation.web.configuration.ResourceServerConfigurerAdapter;
import org.springframework.security.oauth2.config.annotation.web.configurers.ResourceServerSecurityConfigurer;
import org.springframework.security.oauth2.provider.token.TokenStore;
import org.springframework.security.web.authentication.preauth.AbstractPreAuthenticatedProcessingFilter;
import org.springframework.security.web.util.matcher.AntPathRequestMatcher;

import javax.servlet.Filter;
import java.util.Arrays;
import java.util.stream.Collectors;

@Configuration
@EnableResourceServer
public class ResourceServerConfig extends ResourceServerConfigurerAdapter {

Logger log = LoggerFactory.getLogger(ResourceServerConfig.class);

@Autowired
private TokenStore tokenStore;

/**
* 是否开放所有接口
*/
@Value("${http.security.permitAll:false}")
private Boolean isPermitAll;

/**
* 是否启用重复请求过滤
*/
@Value("${request.duplicateFilter.enabled:true}")
private Boolean duplicateRequestFilter;

/**
* 间隔时间(毫秒)
*/
@Value("${request.duplicateFilter.interval_time:1000}")
private Long intervalTime;

/**
* 清除缓存时间(毫秒)
*/
@Value("${request.duplicateFilter.clear_cache_time:30000}")
private Long clearCachetime;

/**
* 不需要验证权限的接口
*/
private String[] permitAll = new String[] {
"/auth/getVCode", "/auth/login"
};


/**
    * 通行规则
* @param http
* @throws Exception
*/
@Override
public void configure(HttpSecurity http) throws Exception {
HttpSecurity httpSecurity = http.csrf().disable();
if (isPermitAll) {
httpSecurity.authorizeRequests().antMatchers("/**").permitAll();
} else {
httpSecurity.authorizeRequests()
.antMatchers(permitAll).permitAll()
.antMatchers("/**").authenticated();
}
if (this.duplicateRequestFilter) {
httpSecurity.addFilterAfter(duplicateRequestFilter(), AbstractPreAuthenticatedProcessingFilter.class);
}
//让X-frame-options失效,去除iframe限制
http.headers().frameOptions().disable();
}

@Override
public void configure(ResourceServerSecurityConfigurer resources) throws Exception {
resources.tokenStore(tokenStore).authenticationEntryPoint(new AuthExceptionEntryPoint())
.accessDeniedHandler(new CustomAccessDeniedHandler());

}

@Bean
public Filter duplicateRequestFilter() {
return new DuplicateRequestFilter(this.duplicateRequestFilter, Arrays.asList(this.permitAll)
.stream().map(AntPathRequestMatcher::new).collect(Collectors.toList()), this.intervalTime,
this.clearCachetime);
}

}
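
The @Value defaults above imply the following configuration keys; a minimal application.yml sketch (the values are just examples) would be:

request:
  duplicateFilter:
    enabled: true            # turn the filter on or off
    interval_time: 1000      # requests closer together than this (ms) count as duplicates
    clear_cache_time: 30000  # cached entries older than this (ms) are evicted
http:
  security:
    permitAll: false         # when true, every endpoint is opened up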

\n", "tags": [ "工作", "解决问题", "java", "429" ] }, { "id": "https://hitoli.com/2024/03/03/Docker%E5%AE%B9%E5%99%A8%E7%AE%A1%E7%90%86%E5%B9%B3%E5%8F%B0-Portainer%E5%AE%89%E8%A3%85/", "url": "https://hitoli.com/2024/03/03/Docker%E5%AE%B9%E5%99%A8%E7%AE%A1%E7%90%86%E5%B9%B3%E5%8F%B0-Portainer%E5%AE%89%E8%A3%85/", "title": "Docker容器管理平台-Portainer安装", "date_published": "2024-03-03T07:52:00.000Z", "content_html": "

# Overview

Portainer is an open-source container management platform. It provides an easy-to-use web UI for managing and monitoring containers and container clusters, and supports several container technologies and setups, including Docker, Kubernetes and Swarm.

# Deployment

\n

#original image
docker run -d --restart=always --name="portainer" -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v D:\\docker\\portainer\\data:/data portainer/portainer-ce

#Chinese-localized build
docker run -d --restart=always --name="portainer" -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v D:\\docker\\portainer\\data:/data 6053537/portainer-ce

\n

# Logging in

\n

http://localhost:9000/#!/home
\n\"\"

\n", "tags": [ "Windows", "工具", "Docker", "Portainer" ] }, { "id": "https://hitoli.com/2024/01/19/IntellIJ%E5%8F%AA%E7%BC%96%E8%AF%91%E6%89%93%E5%8C%85%E6%8C%87%E5%AE%9A%E7%9A%84%E6%A8%A1%E5%9D%97/", "url": "https://hitoli.com/2024/01/19/IntellIJ%E5%8F%AA%E7%BC%96%E8%AF%91%E6%89%93%E5%8C%85%E6%8C%87%E5%AE%9A%E7%9A%84%E6%A8%A1%E5%9D%97/", "title": "IntellIJ只编译打包指定的模块", "date_published": "2024-01-19T08:06:00.000Z", "content_html": "

# Adding a Maven run configuration

In IntelliJ, open the drop-down next to the hammer icon and choose Edit Configurations, click the + button, then pick Maven.

\n

\"\"

\n

\"\"

\n

\"\"

\n

# Filling in the Maven command

Set Working directory to the project root (the command runs relative to the selected directory) and put the following in the Run field.

\n

clean package -pl emergency-dzdz/dzdz-yzt -am -Dmaven.test.skip=true -f pom.xml
or
clean install -pl emergency-dzdz/dzdz-yzt -am -Dmaven.test.skip=true -f pom.xml

clean: deletes the target directory so every build starts clean.
install: packages the project and installs it into the local Maven repository.
-pl emergency-dzdz/dzdz-yzt: -pl selects the module to build; dzdz-yzt is the module name, so Maven builds only dzdz-yzt and its dependencies. Mind the module path: here it is the dzdz-yzt module under emergency-dzdz.
-am: used together with -pl, also builds the modules that dzdz-yzt depends on.
package: packages the project into a JAR or WAR file, depending on the project type.
-f pom.xml: specifies which pom.xml to use; by default Maven looks in the current directory, but this lets you point at a different location.
-DskipTests=true: skips running tests but still compiles the test code.
-Dmaven.test.skip=true: skips both compiling and running tests.

\n", "tags": [ "工作", "解决问题", "IntellIJ", "Maven" ] }, { "id": "https://hitoli.com/2024/01/03/%E8%A7%A3%E5%86%B3Nginx%E8%AE%BF%E9%97%AE%E8%87%AA%E7%AD%BEssl%E8%AF%81%E4%B9%A6%E6%8A%A5%E4%B8%8D%E5%AE%89%E5%85%A8%E5%91%8A%E8%AD%A6/", "url": "https://hitoli.com/2024/01/03/%E8%A7%A3%E5%86%B3Nginx%E8%AE%BF%E9%97%AE%E8%87%AA%E7%AD%BEssl%E8%AF%81%E4%B9%A6%E6%8A%A5%E4%B8%8D%E5%AE%89%E5%85%A8%E5%91%8A%E8%AD%A6/", "title": "解决Nginx访问自签ssl证书报不安全告警", "date_published": "2024-01-03T10:01:00.000Z", "content_html": "

# Generate the root CA private key and root certificate
openssl req -x509 -nodes -days 36500 -newkey rsa:2048 -subj "/C=国家/ST=省/L=市/O=机构" -keyout CA-private.key -out CA-certificate.crt -reqexts v3_req -extensions v3_ca

#example
openssl req -x509 -nodes -days 36500 -newkey rsa:2048 -subj "/C=CN/ST=EZ/L=EZ/O=EZ" -keyout CA-private.key -out CA-certificate.crt -reqexts v3_req -extensions v3_ca

\n

# Generate the private key for the self-signed certificate
openssl genrsa -out private.key 2048

\n

# Generate the certificate signing request from that private key
openssl req -new -key private.key -subj "/C=CN/ST=EZ/L=EZ/O=EZ/CN=192.168.2.117" -sha256 -out private.csr

\n

# Define the certificate extension file (this is what silences the Chrome warning): create a private.ext file with the content below (IP is the nginx server's IP, the same as server_name in nginx.conf)
[ req ]
default_bits = 1024
distinguished_name = req_distinguished_name
req_extensions = san
extensions = san
[ req_distinguished_name ]
countryName = CN
stateOrProvinceName = Definesys
localityName = Definesys
organizationName = Definesys
[SAN]
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = IP:192.168.2.117

\n

# Issue the self-signed certificate (valid for 100 years)
openssl x509 -req -days 36500 -in private.csr -CA CA-certificate.crt -CAkey CA-private.key -CAcreateserial -sha256 -out private.crt -extfile private.ext -extensions SAN

\n

# nginx SSL configuration
ssl_certificate_key  /usr/local/nginx/ssl/private.key;
ssl_certificate /usr/local/nginx/ssl/private.crt;
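
In context those two directives sit inside a server block; a minimal sketch (the listen port, server_name and paths are assumptions, match them to your environment) looks like:

server {
    listen              443 ssl;
    server_name         192.168.2.117;   # must match the IP in subjectAltName
    ssl_certificate     /usr/local/nginx/ssl/private.crt;
    ssl_certificate_key /usr/local/nginx/ssl/private.key;

    location / {
        root  html;
        index index.html;
    }
}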

\n

# Installing the certificate

Install CA-certificate.crt into the Trusted Root Certification Authorities store; the browser will then open the site without the insecure warning.
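
On Windows the root certificate can be imported through the certificate MMC, or from an elevated command prompt with certutil (run as administrator; a sketch):

REM installs the CA into the Trusted Root Certification Authorities store
certutil -addstore -f ROOT CA-certificate.crt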

\n

#SSL test
openssl s_client -connect localhost:8080
#check the certificate format
openssl x509 -in private.crt -text -noout
openssl rsa -in private.key -check
#check expiry (make sure "notBefore" is earlier and "notAfter" later than today)
openssl x509 -in private.crt -noout -dates
#inspect issuer and subject
openssl x509 -in private.crt -noout -issuer -subject

\n", "tags": [ "工作", "解决问题", "Nginx", "https", "SSL", "证书" ] }, { "id": "https://hitoli.com/2024/01/03/Centos7%E7%BC%96%E8%AF%91%E5%8D%87%E7%BA%A7nginx/", "url": "https://hitoli.com/2024/01/03/Centos7%E7%BC%96%E8%AF%91%E5%8D%87%E7%BA%A7nginx/", "title": "Centos7编译升级nginx", "date_published": "2024-01-03T09:40:00.000Z", "content_html": "

# Configure options

\n

./configure
# install prefix
 --prefix=/usr/local/nginx
#unprivileged user nginx runs as
 --user=nginx
#unprivileged group nginx runs as
 --group=nginx
#directory for nginx's pid file
 --pid-path=/var/run/nginx/nginx.pid
#lock file path, guards against accidental concurrent use
 --lock-path=/var/lock/nginx.lock
#nginx error log path
 --error-log-path=/var/log/nginx/error.log
#nginx access log path
 --http-log-path=/var/log/nginx/access.log
#enable the gzip modules for compressing static pages
 --with-http_gzip_static_module
--with-http_gunzip_module
#enable the ssl module
 --with-http_ssl_module
#enable the http2 module
 --with-http_v2_module
#openssl source directory
 --with-openssl=/home/openssl-3.2.0
#nginx stub status module
 --with-http_stub_status_module
--with-http_realip_module
#temp directory for client request bodies
 --http-client-body-temp-path=/usr/local/nginx/client
#temp directory for http proxying
 --http-proxy-temp-path=/usr/local/nginx/proxy
#temp directory for fastcgi
 --http-fastcgi-temp-path=/usr/local/nginx/fastcgi
#temp directory for uwsgi
 --http-uwsgi-temp-path=/usr/local/nginx/uwsgi
#temp directory for scgi
 --http-scgi-temp-path=/usr/local/nginx/scgi
\n

./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-http_v2_module  --with-openssl=/home/openssl-3.2.0

\n

# Build
make    # do not run make install

\n

# Back up the old binary
cp /usr/local/nginx/sbin/nginx /usr/local/nginx/sbin/nginx.old

\n

# Swap in the new binary
#stop nginx
nginx -s stop
#overwrite the old binary with the newly built one
cp /root/nginx-1.24.0/objs/nginx /usr/local/nginx/sbin/
#start nginx
nginx
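
After the swap it is worth confirming the new version, its compiled-in modules, and that the existing config still parses:

#shows the version and the configure arguments the binary was built with
/usr/local/nginx/sbin/nginx -V
#check the configuration with the new binary
/usr/local/nginx/sbin/nginx -t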

\n", "tags": [ "工作", "解决问题", "Nginx", "CentOS" ] }, { "id": "https://hitoli.com/2023/12/24/Docker-desktop%E9%83%A8%E7%BD%B2nacos/", "url": "https://hitoli.com/2023/12/24/Docker-desktop%E9%83%A8%E7%BD%B2nacos/", "title": "Docker desktop部署nacos", "date_published": "2023-12-24T13:37:00.000Z", "content_html": "

# Create the database
create database nacos

\n

# Download the init script

Script file

\n

# Start once in standalone mode to obtain the default files
docker run -d --restart=always --name="nacos" -e MODE=standalone -p 8848:8848 -p 9848:9848 nacos/nacos-server:latest

\n

# Change file permissions inside the container
chmod 777 /home/nacos/conf
chmod 777 /home/nacos/data
chmod 777 /home/nacos/logs

\n

# Copy the files to the host
docker cp nacos:/home/nacos/conf D:\\docker\\nacos\\data\\
docker cp nacos:/home/nacos/data D:\\docker\\nacos\\data\\
docker cp nacos:/home/nacos/logs D:\\docker\\nacos\\data\\

\n

# Create the final container
docker run -d --name nacos --restart=always --network my-net -p 8848:8848 -p 9848:9848 -p 9849:9849 -e MODE=standalone --privileged=true -e SPRING_DATASOURCE_PLATFORM=mysql -e MYSQL_SERVICE_HOST=<mysql-host> -e MYSQL_SERVICE_PORT=<mysql-port> -e MYSQL_SERVICE_USER=<mysql-user> -e MYSQL_SERVICE_PASSWORD=<mysql-password> -e MYSQL_SERVICE_DB_NAME=nacos -e TIME_ZONE='Asia/Shanghai' -v D:\\docker\\nacos\\data\\logs:/home/nacos/logs -v D:\\docker\\nacos\\data\\data:/home/nacos/data -v D:\\docker\\nacos\\data\\conf:/home/nacos/conf nacos/nacos-server:latest

\n", "tags": [ "Windows", "工具", "docker", "nacos" ] }, { "id": "https://hitoli.com/2023/12/02/fastjson%E5%BA%8F%E5%88%97%E5%8C%96%E5%8E%BB%E9%99%A4%E7%A9%BA%E5%AD%97%E7%AC%A6%E4%B8%B2/", "url": "https://hitoli.com/2023/12/02/fastjson%E5%BA%8F%E5%88%97%E5%8C%96%E5%8E%BB%E9%99%A4%E7%A9%BA%E5%AD%97%E7%AC%A6%E4%B8%B2/", "title": "fastjson序列化去除空字符串属性", "date_published": "2023-12-02T13:47:00.000Z", "content_html": "

Today, when converting an object to JSON, I needed to drop any property whose key or value is null or an empty string; recording the snippet here so it can be reused later.
\n

public static String toJSONString(Object object) {
SerializerFeature[] serializerFeatures = new SerializerFeature[] {
//format dates
SerializerFeature.WriteDateUseDateFormat
};
return JSON.toJSONString(object, new ValueFilter() {
@Override
public Object process(Object object, String name, Object value) {
// when the name or the value is null or an empty string, skip the property
if (name == null || (name instanceof String && ((String) name).isEmpty()) ||
value == null || (value instanceof String && ((String) value).isEmpty())) {
return null;
}
return value;
}
}, serializerFeatures);
}
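
A quick illustration of the filter's effect (the object and its values are hypothetical):

// suppose user has name = "tom", nickname = "" and age = null
// toJSONString(user) then yields {"name":"tom"}; the empty string and the null are dropped
String json = toJSONString(user);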

\n", "tags": [ "工作", "解决问题", "fastjson" ] }, { "id": "https://hitoli.com/2023/11/03/Linux%E4%B8%8B%E5%BF%AB%E9%80%9F%E9%83%A8%E7%BD%B2SpringBoot%E9%A1%B9%E7%9B%AE%E7%9A%84%E8%84%9A%E6%9C%AC/", "url": "https://hitoli.com/2023/11/03/Linux%E4%B8%8B%E5%BF%AB%E9%80%9F%E9%83%A8%E7%BD%B2SpringBoot%E9%A1%B9%E7%9B%AE%E7%9A%84%E8%84%9A%E6%9C%AC/", "title": "Linux下快速部署SpringBoot项目的脚本", "date_published": "2023-11-03T13:35:00.000Z", "content_html": "

# Linux deployment scripts

Just put the jar and the yml in the same directory as the scripts and you can start the service right away.

\n
\n

Copy the code below into a text file and rename it start.sh

\n
\n

#!/bin/bash

export CLOUD_HOME=`pwd`

# 获取当前目录中的第一个JAR文件的名称
jar_file=$(find . -maxdepth 1 -type f -name "*.jar" | head -n 1)

if [ -n "$jar_file" ]; then
jar_file=${jar_file#./}
#echo "JAR文件的名称是: $jar_file"
jar_file_name=$(basename "$jar_file" .jar)
else
echo "当前目录没有JAR文件."
exit
fi

# 获取当前目录中的第一个yml文件的名称
yml_file=$(find . -maxdepth 1 -type f -name "*.yml" | head -n 1)

if [ -n "$yml_file" ]; then
yml_file=${yml_file#./}
#echo "YML文件的名称是: $yml_file"
else
echo "当前目录中没有YML文件."
fi

pids=$(ps -ef | grep java | grep $jar_file_name | grep -v grep | awk '{print $2}')

for pid in $pids; do
echo "$jar_file_name is running, pid="$pid
exit 0
done

echo "$jar_file_name is pedding..."

sleep 3

JAVA_OPTS="-Djava.security.egd=file:/dev/./urandom -Dfile.encoding=UTF8"
JAVA_OPTS="$JAVA_OPTS -Dsun.jnu.encoding=UTF8 -Xms512m -Xmx1024m"
JAVA_OPTS="$JAVA_OPTS -Dpid.path=$CLOUD_HOME/temp -Dspring.config.additional-location=$CLOUD_HOME/$yml_file"
JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5007"

nohup java $JAVA_OPTS -jar $CLOUD_HOME/$jar_file >/dev/null 2> $CLOUD_HOME/$jar_file_name.run &
#nohup java $JAVA_OPTS -jar $CLOUD_HOME/$jar_file > $CLOUD_HOME/$jar_file_name.run 2>&1 &

echo "$jar_file_name started."

\n
\n

Copy the code below into a text file and rename it stop.sh

\n
\n

#!/bin/bash

export CLOUD_HOME=`pwd`

# 获取当前目录中的第一个JAR文件的名称
jar_file=$(find . -maxdepth 1 -type f -name "*.jar" | head -n 1)

if [ -n "$jar_file" ]; then
jar_file=${jar_file#./}
#echo "JAR文件的名称是: $jar_file"
jar_file_name=$(basename "$jar_file" .jar)
else
echo "当前目录没有JAR文件."
exit
fi

# 获取当前目录中的第一个yml文件的名称
yml_file=$(find . -maxdepth 1 -type f -name "*.yml" | head -n 1)

if [ -n "$yml_file" ]; then
yml_file=${yml_file#./}
#echo "YML文件的名称是: $yml_file"
else
echo "当前目录中没有YML文件."
fi

pids=$(ps -ef | grep java | grep $jar_file_name | grep -v grep | awk '{print $2}')

for pid in $pids; do
kill -9 $pid
done

echo "$jar_file_name is stopping..."

sleep 5

echo "$jar_file_name stopped."

\n", "tags": [ "Linux", "Shell", "Linux", "SpringBoot", "快速部署", "bash" ] }, { "id": "https://hitoli.com/2023/10/29/%E7%BB%99%E6%88%91%E7%9A%84%E8%80%81%E7%AC%94%E8%AE%B0%E6%9C%AC%E6%B8%85%E7%90%86%E7%81%B0%E5%B0%98/", "url": "https://hitoli.com/2023/10/29/%E7%BB%99%E6%88%91%E7%9A%84%E8%80%81%E7%AC%94%E8%AE%B0%E6%9C%AC%E6%B8%85%E7%90%86%E7%81%B0%E5%B0%98/", "title": "给我的老笔记本清理灰尘", "date_published": "2023-10-29T12:41:00.000Z", "content_html": "

Today I had some spare time, so I took my old laptop apart and cleaned out the dust. It is more than ten years old now; over the years it got extra RAM and a new SSD. After a cleaning it can still be of some use!
\n\"\"
\n\"\"
\n\"\"
\n\"\"
\n\"\"
\n\"\"
\n\"\"

\n", "tags": [ "生活", "日常记录", "笔记本", "DELL" ] }, { "id": "https://hitoli.com/2023/10/28/Windows%E4%B8%8B%E5%BF%AB%E9%80%9F%E9%83%A8%E7%BD%B2SpringBoot%E9%A1%B9%E7%9B%AE%E7%9A%84%E6%89%B9%E5%A4%84%E7%90%86/", "url": "https://hitoli.com/2023/10/28/Windows%E4%B8%8B%E5%BF%AB%E9%80%9F%E9%83%A8%E7%BD%B2SpringBoot%E9%A1%B9%E7%9B%AE%E7%9A%84%E6%89%B9%E5%A4%84%E7%90%86/", "title": "Windows下快速部署SpringBoot项目的批处理", "date_published": "2023-10-28T11:56:00.000Z", "content_html": "

# Windows deployment batch files

Just put the jar and the yml in the same directory as the batch files and double-click to start. Clicking it again closes the previously started window and restarts the service.

\n
\n

Copy the code below into a text file and rename it start.bat

\n
\n

@ECHO OFF
setlocal enabledelayedexpansion

REM 关闭上次进程
SET "pidFile=pid.txt"
if exist "%pidFile%" (
\tfor /f "usebackq" %%a in ("pid.txt") do (
\t\tset PID=%%a
\t)
\tif not "!PID!"=="" (
\t\ttaskkill /F /T /PID !pid!
\t\tdel pid.txt
\t)
)

REM 存储当前进程
for /f %%i in ('wmic process where "name='cmd.exe' and CommandLine like '%%<scriptname>.bat%%'" get ParentProcessId ^| findstr /r "[0-9]"') do set pid=%%i
echo %PID% > pid.txt

REM 设置title
for /f "tokens=2" %%i in ('chcp') do set codepage=%%i
chcp 65001 > nul
title 我的SpringBoot项目
chcp %codepage% > nul

cd %~dp0

REM 获取jar
set "jarFile="
for %%i in (*.jar) do (
if not defined jarFile (
set "jarFile=%%i"
)
)

if not defined jarFile (
echo not find jar
pause
exit
)

SET JAVA_OPTS=-Djava.security.egd=file:/dev/./urandom -Dfile.encoding=UTF-8
set JAVA_OPTS=%JAVA_OPTS% -Dsun.jnu.encoding=UTF8 -Xms512m -Xmx1024m
set JAVA_OPTS=%JAVA_OPTS% -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5007
set JAVA_OPTS=%JAVA_OPTS% -Dpid.path=./temp

REM 获取yml
set "ymlFile="
for %%i in (*.yml) do (
if not defined ymlFile (
set "ymlFile=%%i"
)
)

if defined ymlFile (
\tset JAVA_OPTS=%JAVA_OPTS% -Dspring.config.additional-location=!ymlFile!
) else (
\techo not find yml
)

REM 启动服务
java %JAVA_OPTS% -jar !jarFile!
pause

\n
\n

Copy the code below into a text file and rename it stop.bat

\n
\n

@ECHO OFF
setlocal enabledelayedexpansion

REM 关闭上次进程
SET "pidFile=pid.txt"
if exist "%pidFile%" (
\tfor /f "usebackq" %%a in ("pid.txt") do (
\t\tset PID=%%a
\t)
\tif not "!PID!"=="" (
\t\ttaskkill /F /T /PID !pid!
\t\tdel pid.txt
\t)
)

exit

\n", "tags": [ "Windows", "工具", "SpringBoot", "快速部署", "Bat", "批处理" ] }, { "id": "https://hitoli.com/2023/10/28/Centos%E6%8C%82%E8%BD%BD%E6%96%B0%E7%A1%AC%E7%9B%98/", "url": "https://hitoli.com/2023/10/28/Centos%E6%8C%82%E8%BD%BD%E6%96%B0%E7%A1%AC%E7%9B%98/", "title": "Centos挂载新硬盘", "date_published": "2023-10-28T11:26:00.000Z", "content_html": "

# Inspect the disks

fdisk -l    # show the current disks and their partitions

\n\"\"
From the screenshot:
/dev/vdb is a 60 GB data disk containing the MBR partition /dev/vdb1 of 50 GB.
/dev/vdc is a 60 GB data disk containing the GPT partition /dev/vdc1 of 50 GB.

\n

df -TH    # show each partition's filesystem type

\n\"\"
From the screenshot:
/dev/vdb1 uses ext4 and is mounted at /mnt/disk1.
/dev/vdc1 uses xfs and is mounted at /mnt/disk2.
\n
fdisk /dev/vdb    # inspect the new disk

\n\"\"
\n
lsblk    # list block devices and partitions

\n\"\"

\n

# Mount the new disk

mkfs.ext4 /dev/vdb    # format the disk

\n\"\"
\n
cd /mnt
mkdir data    # create the mount point
mount /dev/vdb /mnt/data    # mount the disk

\n
df -h    # verify the mount

\n\"\"
There are three ways to look up the UUID:
\n
blkid

\n\"\"
\n
lsblk -f

\n\"\"
\n
ll /dev/disk/by-uuid/

\n\"\"
\n
#make the mount persistent
echo "UUID=c8ac09ca-fd4d-4511-bd2c-4fdf96f08168 /data ext4 defaults 0 0" >> /etc/fstab
#mount everything listed in /etc/fstab
mount -a

\n

# Temporary unmount

umount /dev/vdb    # the disk is mounted again after a reboot

\n

# Permanent unmount

vim /etc/fstab    # remove the line added for this disk

\n", "tags": [ "Linux", "Centos", "Centos", "Linux", "Mount" ] }, { "id": "https://hitoli.com/2023/09/09/squid-stunnel-%E7%A7%91%E5%AD%A6%E4%B8%8A%E7%BD%91/", "url": "https://hitoli.com/2023/09/09/squid-stunnel-%E7%A7%91%E5%AD%A6%E4%B8%8A%E7%BD%91/", "title": "squid+stunnel 科学上网", "date_published": "2023-09-09T08:48:00.000Z", "content_html": "

# Preface

There are many ways to get past the firewall, including plenty of free third-party options whose pros and cons I will not discuss here. In practice there are still times when you need to build your own, and this post covers doing so with squid + stunnel.

\n

# Preparation

A server that can reach the open internet, such as a Hong Kong cloud host running Ubuntu.

\n
# squid
\n\n

apt-get install -y squid

\n\n
\n

Create the password file

\n
\n

apt-get install apache2-utils
htpasswd -c /etc/squid/squid_user.txt <username>

\n
\n

Adjust the squid configuration. You can either edit /etc/squid/squid.conf directly or edit /etc/squid/conf.d/debian.conf; both work the same way: append the following at the bottom.

\n
\n

#dns服务器地址
dns_nameservers 8.8.8.8 8.8.4.4
dns_v4_first on
# 监听端口
http_port 3128
# 定义squid密码文件与ncsa_auth文件位置
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/squid_user.txt
# 认证进程的数量
auth_param basic children 15
# 认证对话框显示提示信息
auth_param basic realm Squid proxy-caching web server
# 认证有效期
auth_param basic credentialsttl 24 hours
# 是否区分用户名大小,off为不区分
auth_param basic casesensitive off
# 对定义的squid_user文件内的用户开启认证访问
acl 用户名 proxy_auth REQUIRED
# 允许squid_user文件内用户进行代理
http_access allow 用户名
# 顺序匹配,最后添加拒绝所有未允许的规则。不添加会发现,未匹配到的规则会被放行
http_access deny all
# 缓存设置
cache_dir ufs /var/spool/squid 100 16 256 read-only
cache_mem 0 MB
coredump_dir /var/spool/squid
# 配置高匿,不允许设置任何多余头信息,保持原请求header。
header_access Via deny all
header_access X-Forwarded-For deny all
header_access Server deny all
header_access X-Cache deny all
header_access X-Cache-Lookup deny all
forwarded_for off
via off

# logs相关配置
emulate_httpd_log on
logformat squid %{X-Forwarded-For}>h %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
logfile_rotate 20

########新版########
#Hide client ip
forwarded_for delete

#Turn off via header
via off

#Deny request for original source of a request
follow_x_forwarded_for deny all

#See below
request_header_access X-Forwarded-For deny all
########新版########

\n
\n

At this point you can already proxy through the server by entering the squid host's IP, port 3128 and the username/password (visit https://www.ip.cn/ and you will see your exit IP has become the squid server's). To actually get past the Great Firewall, however, the proxied traffic must also be encrypted, otherwise requests to blocked sites are still stopped, which is why stunnel is installed next.
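
Before adding stunnel you can verify the plain squid proxy with a one-line curl from any client; any site that echoes your IP will do.

curl -x http://<username>:<password>@<squid-server-ip>:3128 https://www.ip.cn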

\n
\n
# stunnel server side
\n\n

apt-get install -y stunnel

\n\n

openssl req -new -x509 -days 3650 -nodes -out stunnel.pem -keyout stunnel.pem

\n\n

; 设置stunnel的pid文件路径
pid = /etc/stunnel/stunnel.pid
; 设置stunnel工作的用户(组)
setuid = root
setgid = root

; 开启日志等级:emerg (0), alert (1), crit (2), err (3), warning (4), notice (5), info (6), or debug (7)
debug = 7
; 日志文件路径
output = /etc/stunnel/stunnel.log

; 证书文件
cert = /etc/stunnel/stunnel.pem
; 私钥文件
key = /etc/stunnel/stunnel.pem

; 自定义服务名squid-proxy
[squid-proxy]
; 服务监听的端口,client要连接这个端口与server通信
accept = 1234(自定义)
; 服务要连接的端口,连接到squid的3128端口,将数据发给squid
connect = 3128

\n
# stunnel client side

The client can be installed on the machine that needs the proxy and started only when required (set the proxy address to 127.0.0.1 plus the client's listening port), or it can run permanently on a server inside China that stays connected (then use that server's IP plus the listening port). In this example the client is a Windows machine.

\n
\n\n

https://www.stunnel.org/downloads.html

\n\n

[squid-proxy]
client = yes
; listen on 3128; the browser's proxy setting is then stunnel-client-ip:3128
accept = 3128
; IP and port of the stunnel server to connect to (the custom port defined on the server)
connect = <stunnel-server-ip>:1234

; verify the certificate presented by the peer
verify = 2
; file used for certificate verification (copy the certificate generated on the stunnel server here and rename it stunnel-server.pem)
CAfile = C:\Program Files (x86)\stunnel\config\stunnel-server.pem

\n
\n

With the proxy set to the stunnel client's IP and port 3128 you can now browse freely. If you only want specific URLs to go through the proxy, install the Proxy SwitchyOmega extension (rule lists are available at https://github.com/gfwlist/gfwlist).

\n
\n

\"\"
\n\"\"

\n", "tags": [ "生活", "技术分享", "stunnel", "科学上网", "squid", "代理" ] }, { "id": "https://hitoli.com/2023/09/09/Windows-11%E5%8F%B3%E9%94%AE%E8%8F%9C%E5%8D%95%E6%81%A2%E5%A4%8D%E8%80%81%E7%89%88%E6%9C%AC/", "url": "https://hitoli.com/2023/09/09/Windows-11%E5%8F%B3%E9%94%AE%E8%8F%9C%E5%8D%95%E6%81%A2%E5%A4%8D%E8%80%81%E7%89%88%E6%9C%AC/", "title": "Windows 11右键菜单恢复老版本", "date_published": "2023-09-09T08:38:00.000Z", "content_html": "

# How to restore it

1. Press Win + X

\n

2. Choose Terminal (Admin)

\n

3. Enter the following command and press Enter:
reg add "HKCU\Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\InprocServer32" /f /ve

\n

4. Restart the computer
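
To go back to the new Windows 11 menu later, delete the same key and restart (this simply reverses the command above):

reg delete "HKCU\Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}" /f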

\n", "tags": [ "Windows", "系统优化", "Windows 11" ] }, { "id": "https://hitoli.com/2023/07/08/%E8%A7%A3%E5%86%B3Lombok%E6%8A%A5%E9%94%99/", "url": "https://hitoli.com/2023/07/08/%E8%A7%A3%E5%86%B3Lombok%E6%8A%A5%E9%94%99/", "title": "解决Lombok报错", "date_published": "2023-07-08T02:51:00.000Z", "content_html": "

# Problem

1. Error details
\n
\n

java: You aren’t using a compiler supported by lombok, so lombok will not work and has been disabled.
\nYour processor is: com.sun.proxy.$Proxy26
\nLombok supports: OpenJDK javac, ECJ

\n
\n

\"\"

\n
2. Analysis

Lombok is failing during compilation, most likely because the dependency has not been updated to the latest version.

3. Fix

Add the following to IntelliJ IDEA's global Compiler settings:
    \n

    -Djps.track.ap.dependencies=false

    \n\"\"

\n", "tags": [ "工作", "解决问题", "Lombok", "IntelliJ", "IDEA" ] } ] }