OkHttp HTTP/1.x and HTTP/2.0 Source Code Analysis

Source Code Analysis

Posted by Mio4kon on 2019-04-12

The purpose of this post is to understand how OkHttp implements header compression and multiplexing on HTTP/2.0. We start from OkHttp's core Interceptors (they are wired together with the chain-of-responsibility pattern, which I won't expand on here); below is a brief summary of what each Interceptor does.

RetryAndFollowUpInterceptor

Main responsibilities:

  1. Decide whether the request can be retried
  2. Handle non-200 response codes (e.g. on a 301, take the Location header, turn it into a new URL, and issue the redirect request)
  3. Create the StreamAllocation object (used by ConnectInterceptor)
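The redirect handling described above can be sketched as follows. This is a hypothetical standalone helper, not OkHttp's actual code: the real logic lives in RetryAndFollowUpInterceptor.followUpRequest and also handles authentication challenges, method changes, and more.

```java
// Hypothetical sketch of redirect follow-up: on a redirect status code,
// take the Location header and resolve it against the original request URL.
import java.net.URI;

public class RedirectSketch {
    /** Returns the redirected URL, or null when the response is not a redirect. */
    static String followUp(int code, String requestUrl, String locationHeader) {
        if (code != 301 && code != 302 && code != 307 && code != 308) return null;
        if (locationHeader == null) return null;
        // Location may be relative; resolve it against the original request URL.
        return URI.create(requestUrl).resolve(locationHeader).toString();
    }

    public static void main(String[] args) {
        String next = followUp(301, "http://example.com/old", "/new");
        if (!"http://example.com/new".equals(next)) throw new AssertionError(next);
        if (followUp(200, "http://example.com/", null) != null) throw new AssertionError();
        System.out.println("redirect -> " + next);
    }
}
```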

The StreamAllocation class carries Connections, Streams, and Calls:

  1. Connections: physical socket connections; they can be cancelled
  2. Streams: logical HTTP request/response pairs layered on a connection; an HTTP/1.x connection carries one stream, while an HTTP/2.0 connection carries many
  3. Calls: a logical sequence of streams

A side note: the retry logic contains special handling for ConnectionShutdownException, which works around a bug in older versions of OkHttp; for details see "okhttp和http 2.0相遇引发的'血案'" on Zhihu.

BridgeInterceptor

Main responsibilities:

  1. Fill in and adjust the request headers (e.g. "Connection: Keep-Alive")
  2. Post-process the response (saving cookies, gzip decompression, etc.)
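The transparent gzip handling mentioned above can be illustrated with a minimal round trip. OkHttp itself adds "Accept-Encoding: gzip" to the request and unwraps a "Content-Encoding: gzip" response body with Okio's GzipSource; plain java.util.zip is used here purely for illustration.

```java
// Sketch of transparent gzip: what the server compresses, the client
// decompresses before handing the body to the caller.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipSketch {
    static byte[] gzip(byte[] plain) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) { gz.write(plain); }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static byte[] gunzip(byte[] compressed) {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return gz.readAllBytes();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] body = "{\"ok\":true}".getBytes(StandardCharsets.UTF_8);
        byte[] wire = gzip(body); // what would travel on the wire
        String decoded = new String(gunzip(wire), StandardCharsets.UTF_8);
        if (!"{\"ok\":true}".equals(decoded)) throw new AssertionError(decoded);
        System.out.println(decoded);
    }
}
```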

CacheInterceptor

Main responsibilities:
As the name suggests, it handles caching of responses (backed by DiskLruCache).
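The decision CacheInterceptor acts on can be sketched roughly as below. This is a heavy simplification of OkHttp's CacheStrategy, which also handles conditional GETs, Vary headers, and freshness calculations; only the three top-level outcomes are shown.

```java
// Simplified sketch of the cache decision: a (networkRequest, cacheResponse)
// pair determines whether to fail, serve from cache, or go to the network.
public class CacheStrategySketch {
    enum Outcome { GATEWAY_TIMEOUT_504, FROM_CACHE, FROM_NETWORK }

    static Outcome decide(boolean hasNetworkRequest, boolean hasCacheResponse) {
        // Both forbidden (e.g. "only-if-cached" with nothing cached): fail with 504.
        if (!hasNetworkRequest && !hasCacheResponse) return Outcome.GATEWAY_TIMEOUT_504;
        // No network needed: serve the stored response directly.
        if (!hasNetworkRequest) return Outcome.FROM_CACHE;
        // Otherwise proceed down the chain to the network.
        return Outcome.FROM_NETWORK;
    }

    public static void main(String[] args) {
        if (decide(false, false) != Outcome.GATEWAY_TIMEOUT_504) throw new AssertionError();
        if (decide(false, true) != Outcome.FROM_CACHE) throw new AssertionError();
        if (decide(true, true) != Outcome.FROM_NETWORK) throw new AssertionError();
    }
}
```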

ConnectInterceptor

Main responsibilities:

  1. Look for a matching Connection in the ConnectionPool; if one is found, add the current StreamAllocation to that connection's allocations (the acquire method)
  2. Check whether the matched Connection is still usable; if not, release it and keep searching the pool
  3. If no reusable Connection exists, create a new one, connect it (TCP and TLS), and put it into the pool
  4. Use the Connection to create a Stream, i.e. an HttpCodec (Http2Codec for an HTTP/2 connection, Http1Codec otherwise), and pass it down the chain to the next interceptor

CallServerInterceptor

Main responsibilities:

This one is simple: it uses the HttpCodec to write and read the headers and body, with some response-code checks along the way.

Implementation differences between Http1Codec and Http2Codec

writeRequestHeaders

[Http1Codec.java]

@Override public void writeRequestHeaders(Request request) throws IOException {
  String requestLine = RequestLine.get(
      request, streamAllocation.connection().route().proxy().type());
  writeRequest(request.headers(), requestLine);
}

/** Returns bytes of a request header for sending on an HTTP transport. */
public void writeRequest(Headers headers, String requestLine) throws IOException {
  if (state != STATE_IDLE) throw new IllegalStateException("state: " + state);
  sink.writeUtf8(requestLine).writeUtf8("\r\n");
  for (int i = 0, size = headers.size(); i < size; i++) {
    sink.writeUtf8(headers.name(i))
        .writeUtf8(": ")
        .writeUtf8(headers.value(i))
        .writeUtf8("\r\n");
  }
  sink.writeUtf8("\r\n");
  state = STATE_OPEN_REQUEST_BODY;
}

Plain and direct: write the request line, then the headers.

[Http2Codec.java]

@Override public void writeRequestHeaders(Request request) throws IOException {
  if (stream != null) return;

  boolean hasRequestBody = request.body() != null;
  List<Header> requestHeaders = http2HeadersList(request);
  stream = connection.newStream(requestHeaders, hasRequestBody);
  stream.readTimeout().timeout(chain.readTimeoutMillis(), TimeUnit.MILLISECONDS);
  stream.writeTimeout().timeout(chain.writeTimeoutMillis(), TimeUnit.MILLISECONDS);
}

Note the early return when the stream already exists: the request headers are only written when the stream is first created.

[Http2Connection.java]

private Http2Stream newStream(
    int associatedStreamId, List<Header> requestHeaders, boolean out) throws IOException {
  boolean outFinished = !out;
  boolean inFinished = false;
  boolean flushHeaders;
  Http2Stream stream;
  int streamId;

  synchronized (writer) {
    synchronized (this) {
      if (nextStreamId > Integer.MAX_VALUE / 2) {
        shutdown(REFUSED_STREAM);
      }
      if (shutdown) {
        throw new ConnectionShutdownException();
      }
      // Allocate the stream id; client-initiated ids are odd.
      streamId = nextStreamId;
      nextStreamId += 2;
      // Create the stream object.
      stream = new Http2Stream(streamId, this, outFinished, inFinished, null);
      flushHeaders = !out || bytesLeftInWriteWindow == 0L || stream.bytesLeftInWriteWindow == 0L;
      if (stream.isOpen()) {
        streams.put(streamId, stream);
      }
    }
    if (associatedStreamId == 0) {
      writer.synStream(outFinished, streamId, associatedStreamId, requestHeaders);
    } else if (client) {
      throw new IllegalArgumentException("client streams shouldn't have associated stream IDs");
    } else { // HTTP/2 has a PUSH_PROMISE frame.
      writer.pushPromise(associatedStreamId, streamId, requestHeaders);
    }
  }
  if (flushHeaders) {
    writer.flush();
  }
  return stream;
}

Notes:

  1. Client stream ids must be odd (https://http2.github.io/http2-spec/#HEADERS):
    "Streams are identified with an unsigned 31-bit integer. Streams initiated by a client MUST use odd-numbered stream identifiers;"
  2. Every stream created is put into the Http2Connection's streams map, which again shows that one Http2Connection corresponds to many streams
  3. In HTTP/2 both the client and the server can create streams: the client by sending HEADERS, the server via PUSH_PROMISE. That is why there are two branches at the end, writer.synStream and writer.pushPromise; here we only follow the client's HEADERS path.
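The odd-id rule quoted above boils down to starting at an odd number and stepping by 2, which the `nextStreamId += 2` in newStream implements. A tiny standalone illustration (the starting value here is illustrative, not OkHttp's actual initial value):

```java
// Illustration of client-side stream id allocation: an odd start plus
// steps of 2 keeps every client-initiated stream id odd.
public class StreamIds {
    static int nextStreamId = 1; // client ids are odd; a server would use even ids

    static int allocate() {
        int id = nextStreamId;
        nextStreamId += 2;
        return id;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            int id = allocate();
            if (id % 2 != 1) throw new AssertionError("client stream id must be odd: " + id);
            System.out.println("stream " + id);
        }
    }
}
```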

[Http2Writer.java]

public synchronized void synStream(boolean outFinished, int streamId,
    int associatedStreamId, List<Header> headerBlock) throws IOException {
  if (closed) throw new IOException("closed");
  headers(outFinished, streamId, headerBlock);
}

void headers(boolean outFinished, int streamId, List<Header> headerBlock) throws IOException {
  if (closed) throw new IOException("closed");
  // Write the headers into hpackBuffer.
  hpackWriter.writeHeaders(headerBlock);

  long byteCount = hpackBuffer.size();
  int length = (int) Math.min(maxFrameSize, byteCount);
  byte type = TYPE_HEADERS;
  byte flags = byteCount == length ? FLAG_END_HEADERS : 0;
  if (outFinished) flags |= FLAG_END_STREAM;
  // Write streamId, type, and flags.
  frameHeader(streamId, length, type, flags);
  // Write to the socket stream.
  sink.write(hpackBuffer, length);
  if (byteCount > length) writeContinuationFrames(streamId, byteCount - length);
}

Notes:

  1. HPACK is HTTP/2's new header-compression feature (https://tools.ietf.org/html/rfc7541)
  2. The frame is HTTP/2's smallest transfer unit; every frame starts with a fixed 9-byte header (length, type, flags, stream id), written by the frameHeader method above (the exact layout is reproduced in the comment inside Http2Reader.nextFrame below)
  3. Finally, the compressed hpackBuffer is written out
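The fixed 9-byte frame header can be serialized as in the sketch below, with field widths per RFC 7540 §4.1. This is not OkHttp's frameHeader, just an equivalent standalone encoder for illustration.

```java
// Sketch of the 9-byte HTTP/2 frame header: a 24-bit payload length,
// an 8-bit type, an 8-bit flags byte, then the reserved R bit plus a
// 31-bit stream identifier.
public class FrameHeader {
    static byte[] encode(int length, byte type, byte flags, int streamId) {
        return new byte[] {
                (byte) ((length >>> 16) & 0xff),
                (byte) ((length >>> 8) & 0xff),
                (byte) (length & 0xff),
                type,
                flags,
                (byte) ((streamId >>> 24) & 0x7f), // top bit is the reserved R bit, kept 0
                (byte) ((streamId >>> 16) & 0xff),
                (byte) ((streamId >>> 8) & 0xff),
                (byte) (streamId & 0xff),
        };
    }

    public static void main(String[] args) {
        // A HEADERS (0x1) frame with END_HEADERS (0x4) on stream 3, 16-byte payload.
        byte[] h = encode(16, (byte) 0x1, (byte) 0x4, 3);
        if (h.length != 9) throw new AssertionError();
        if (h[2] != 16 || h[3] != 0x1 || h[4] != 0x4 || h[8] != 3) throw new AssertionError();
    }
}
```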

Differences in readResponseHeaders

[Http1Codec.java]
The code isn't worth pasting: it simply reads the status line and the headers from the socket stream.

[Http2Codec.java]

@Override public Response.Builder readResponseHeaders(boolean expectContinue) throws IOException {
  // Read the headers.
  Headers headers = stream.takeHeaders();
  // Build the response builder.
  Response.Builder responseBuilder = readHttp2HeadersList(headers, protocol);
  if (expectContinue && Internal.instance.code(responseBuilder) == HTTP_CONTINUE) {
    return null;
  }
  return responseBuilder;
}

[Http2Stream.java]

public synchronized Headers takeHeaders() throws IOException {
  readTimeout.enter();
  try {
    while (headersQueue.isEmpty() && errorCode == null) {
      waitForIo();
    }
  } finally {
    readTimeout.exitAndThrowIfTimedOut();
  }
  if (!headersQueue.isEmpty()) {
    return headersQueue.removeFirst();
  }
  throw new StreamResetException(errorCode);
}

Reading the headers means waiting for data to appear in headersQueue, which implies some other thread is putting data into headersQueue.

A quick search turns up two add operations: one in the receiveHeaders method and one in the Http2Stream constructor, but both are funneled through the headers method of Http2Connection$ReaderRunnable.

[Http2Connection$ReaderRunnable]

@Override public void headers(boolean inFinished, int streamId, int associatedStreamId,
    List<Header> headerBlock) {
  ...
  Http2Stream stream;
  synchronized (Http2Connection.this) {
    stream = getStream(streamId);
    ...
    // Create a stream.
    Headers headers = Util.toHeaders(headerBlock);
    final Http2Stream newStream = new Http2Stream(streamId, Http2Connection.this,
        false, inFinished, headers);
    lastGoodStreamId = streamId;
    streams.put(streamId, newStream);
    ...
    // Update an existing stream.
    stream.receiveHeaders(headerBlock);
    if (inFinished) stream.receiveFin();
  }
}

There is a fair amount of code here, so the less important parts are trimmed. What this actually does is process the already-received headerBlock: create a new stream and receive the headers. "Receive" here does not mean reading from the socket; it means putting the already-read data into the headersQueue mentioned earlier. Finally, if inFinished is set, it wakes up the waiters and eventually removes the stream.

So when exactly is the headerBlock read, and when does ReaderRunnable get executed?

The second question will come up later when we analyze connect. First, the first question: when is the headerBlock read?

In ReaderRunnable's execute method.

[Http2Connection$ReaderRunnable]

ReaderRunnable(Http2Reader reader) {
  super("OkHttp %s", hostname);
  this.reader = reader;
}

@Override protected void execute() {
  ErrorCode connectionErrorCode = ErrorCode.INTERNAL_ERROR;
  ErrorCode streamErrorCode = ErrorCode.INTERNAL_ERROR;
  try {
    reader.readConnectionPreface(this);
    // Keep reading the next frame in a loop.
    while (reader.nextFrame(false, this)) {
    }
    connectionErrorCode = ErrorCode.NO_ERROR;
    streamErrorCode = ErrorCode.CANCEL;
  } catch (IOException e) {
    connectionErrorCode = ErrorCode.PROTOCOL_ERROR;
    streamErrorCode = ErrorCode.PROTOCOL_ERROR;
  } finally {
    try {
      close(connectionErrorCode, streamErrorCode);
    } catch (IOException ignored) {
    }
    Util.closeQuietly(reader);
  }
}

[Http2Reader.java]

public boolean nextFrame(boolean requireSettings, Handler handler) throws IOException {
  try {
    // The fixed 9 bytes.
    source.require(9); // Frame header size
  } catch (IOException e) {
    return false; // This might be a normal socket close.
  }

  //  0                   1                   2                   3
  //  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
  // +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  // |                 Length (24)                   |
  // +---------------+---------------+---------------+
  // |   Type (8)    |   Flags (8)   |
  // +-+-+-----------+---------------+-------------------------------+
  // |R|                 Stream Identifier (31)                      |
  // +=+=============================================================+
  // |                   Frame Payload (0...)                      ...
  // +---------------------------------------------------------------+
  // Read length, type, flags, and streamId, then the payload for that frame type.
  int length = readMedium(source);
  if (length < 0 || length > INITIAL_MAX_FRAME_SIZE) {
    throw ioException("FRAME_SIZE_ERROR: %s", length);
  }
  byte type = (byte) (source.readByte() & 0xff);
  if (requireSettings && type != TYPE_SETTINGS) {
    throw ioException("Expected a SETTINGS frame but was %s", type);
  }
  byte flags = (byte) (source.readByte() & 0xff);
  int streamId = (source.readInt() & 0x7fffffff); // Ignore reserved bit.
  if (logger.isLoggable(FINE)) logger.fine(frameLog(true, streamId, length, type, flags));

  switch (type) {
    case TYPE_DATA:
      readData(handler, length, flags, streamId);
      break;
    case TYPE_HEADERS:
      readHeaders(handler, length, flags, streamId);
      break;
    case TYPE_PRIORITY:
      readPriority(handler, length, flags, streamId);
      break;
    case TYPE_RST_STREAM:
      readRstStream(handler, length, flags, streamId);
      break;
    case TYPE_SETTINGS:
      readSettings(handler, length, flags, streamId);
      break;
    case TYPE_PUSH_PROMISE:
      readPushPromise(handler, length, flags, streamId);
      break;
    case TYPE_PING:
      readPing(handler, length, flags, streamId);
      break;
    case TYPE_GOAWAY:
      readGoAway(handler, length, flags, streamId);
      break;
    case TYPE_WINDOW_UPDATE:
      readWindowUpdate(handler, length, flags, streamId);
      break;
    default:
      // Implementations MUST discard frames that have unknown or unsupported types.
      source.skip(length);
  }
  return true;
}

The comment in this code should look familiar: we saw the frame format earlier when analyzing how HEADERS frames are sent, and the frames the server sends back naturally follow the same format. There are 10 frame types (https://http2.github.io/http2-spec/#rfc.toc).

[Figure: the ten HTTP/2 frame types]

Let's look at the readHeaders method:

[Http2Reader.java]

private void readHeaders(Handler handler, int length, byte flags, int streamId)
    throws IOException {
  if (streamId == 0) throw ioException("PROTOCOL_ERROR: TYPE_HEADERS streamId == 0");

  boolean endStream = (flags & FLAG_END_STREAM) != 0;
  short padding = (flags & FLAG_PADDED) != 0 ? (short) (source.readByte() & 0xff) : 0;
  if ((flags & FLAG_PRIORITY) != 0) {
    readPriority(handler, streamId);
    length -= 5; // account for above read.
  }
  length = lengthWithoutPadding(length, flags, padding);
  List<Header> headerBlock = readHeaderBlock(length, padding, flags, streamId);
  // Back to the headers-handling logic we saw earlier.
  handler.headers(endStream, streamId, -1, headerBlock);
}

private List<Header> readHeaderBlock(int length, short padding, byte flags, int streamId)
    throws IOException {
  continuation.length = continuation.left = length;
  continuation.padding = padding;
  continuation.flags = flags;
  continuation.streamId = streamId;

  // TODO: Concat multi-value headers with 0x0, except COOKIE, which uses 0x3B, 0x20.
  // http://tools.ietf.org/html/draft-ietf-httpbis-http2-17#section-8.1.2.5
  hpackReader.readHeaders();
  return hpackReader.getAndResetHeaderList();
}

The main logic is bookkeeping plus reading the header data through hpackReader. As mentioned earlier, HPACK is HTTP/2's header-compression feature; let's briefly see how it extracts the data.

[Hpack.java]

void readHeaders() throws IOException {
  while (!source.exhausted()) {
    int b = source.readByte() & 0xff;
    if (b == 0x80) { // 10000000
      throw new IOException("index == 0");
    } else if ((b & 0x80) == 0x80) { // 1NNNNNNN
      int index = readInt(b, PREFIX_7_BITS);
      readIndexedHeader(index - 1);
    } else if (b == 0x40) { // 01000000
      readLiteralHeaderWithIncrementalIndexingNewName();
    } else if ((b & 0x40) == 0x40) { // 01NNNNNN
      int index = readInt(b, PREFIX_6_BITS);
      readLiteralHeaderWithIncrementalIndexingIndexedName(index - 1);
    } else if ((b & 0x20) == 0x20) { // 001NNNNN
      maxDynamicTableByteCount = readInt(b, PREFIX_5_BITS);
      if (maxDynamicTableByteCount < 0
          || maxDynamicTableByteCount > headerTableSizeSetting) {
        throw new IOException("Invalid dynamic table size update " + maxDynamicTableByteCount);
      }
      adjustDynamicTableByteCount();
    } else if (b == 0x10 || b == 0) { // 000?0000 - Ignore never indexed bit.
      readLiteralHeaderWithoutIndexingNewName();
    } else { // 000?NNNN - Ignore never indexed bit.
      int index = readInt(b, PREFIX_4_BITS);
      readLiteralHeaderWithoutIndexingIndexedName(index - 1);
    }
  }
}

Notes:

  1. The class keeps two tables: STATIC_HEADER_TABLE and dynamicTable
  2. STATIC_HEADER_TABLE is predefined; a header not found in the static table must be looked up in the dynamic table
  3. The dynamic table's maximum byte size is controlled by the SETTINGS frame's SETTINGS_HEADER_TABLE_SIZE; OkHttp caps it at 16 KiB
  4. Whether a literal is Huffman-coded is indicated by its H bit being set to 1
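To make the "1NNNNNNN" (indexed header field) branch in readHeaders concrete, here is a standalone sketch of the static-table lookup. Only the first eight RFC 7541 static-table entries are included, and integer continuation bytes (indices that don't fit in the 7-bit prefix) are omitted.

```java
// Sketch of HPACK indexed-field decoding: a byte like 0x82 means
// "indexed header field, index 2", which the RFC 7541 static table
// resolves to ":method: GET".
import java.util.List;

public class HpackSketch {
    static final List<String[]> STATIC_TABLE = List.of(
            new String[] {":authority", ""},
            new String[] {":method", "GET"},
            new String[] {":method", "POST"},
            new String[] {":path", "/"},
            new String[] {":path", "/index.html"},
            new String[] {":scheme", "http"},
            new String[] {":scheme", "https"},
            new String[] {":status", "200"});

    static String decodeIndexed(int b) {
        if ((b & 0x80) != 0x80) throw new IllegalArgumentException("not an indexed field");
        int index = b & 0x7f; // 7-bit prefix; a value of 127 would continue, omitted here
        String[] entry = STATIC_TABLE.get(index - 1); // the table is 1-based
        return entry[0] + ": " + entry[1];
    }

    public static void main(String[] args) {
        if (!":method: GET".equals(decodeIndexed(0x82))) throw new AssertionError();
        if (!":scheme: https".equals(decodeIndexed(0x87))) throw new AssertionError();
        System.out.println(decodeIndexed(0x82));
    }
}
```

A full decoder also maintains the dynamic table (indices past the static table) and the literal-with-indexing cases seen in the branches above.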


At this point we have a rough picture of how Http1Codec and Http2Codec differ in sending and receiving: the former is fairly direct, while the latter adds two important features, multiplexing (with the frame as the transfer unit) and header compression (HPACK).

How connect implements TCP and TLS

As mentioned earlier, ConnectInterceptor calls newStream, which searches the connection pool for a healthy connection; when no usable connection is found, it creates a RealConnection object and calls connect. Let's see what that does.

[RealConnection.java]

public void connect(int connectTimeout, int readTimeout, int writeTimeout,
    int pingIntervalMillis, boolean connectionRetryEnabled, Call call,
    EventListener eventListener) {
  if (protocol != null) throw new IllegalStateException("already connected");

  RouteException routeException = null;
  // connectionSpecs carries the CipherSuites and TlsVersions.
  List<ConnectionSpec> connectionSpecs = route.address().connectionSpecs();
  ConnectionSpecSelector connectionSpecSelector = new ConnectionSpecSelector(connectionSpecs);
  ...
  while (true) {
    try {
      // Check whether a tunnel must be built (any proxy setup goes down this path).
      if (route.requiresTunnel()) {
        connectTunnel(connectTimeout, readTimeout, writeTimeout, call, eventListener);
        if (rawSocket == null) {
          // We were unable to connect the tunnel but properly closed down our resources.
          break;
        }
      } else {
        // No tunnel needed: connect the socket directly.
        connectSocket(connectTimeout, readTimeout, call, eventListener);
      }
      // After the socket is connected, establish the protocol.
      establishProtocol(connectionSpecSelector, pingIntervalMillis, call, eventListener);
      eventListener.connectEnd(call, route.socketAddress(), route.proxy(), protocol);
      break;
    } catch (IOException e) {
      closeQuietly(socket);
      closeQuietly(rawSocket);
      socket = null;
      rawSocket = null;
      source = null;
      sink = null;
      handshake = null;
      protocol = null;
      http2Connection = null;
      eventListener.connectFailed(call, route.socketAddress(), route.proxy(), null, e);
      ...
    }
  }
  ...
  if (http2Connection != null) {
    synchronized (connectionPool) {
      allocationLimit = http2Connection.maxConcurrentStreams();
    }
  }
}

It does the following:

  1. Gets the connectionSpecs (cipher suites and TLS versions)
  2. Decides whether a tunnel is required: connectTunnel if so, otherwise connectSocket directly
  3. Once the socket is connected, calls establishProtocol to establish the protocol
  4. For an HTTP/2 connection, additionally sets the allocationLimit

Let's walk through the tunnel flow:

private void connectTunnel(int connectTimeout, int readTimeout, int writeTimeout, Call call,
    EventListener eventListener) throws IOException {
  // Build the tunnel request.
  Request tunnelRequest = createTunnelRequest();
  HttpUrl url = tunnelRequest.url();
  // At most 21 tunnel attempts.
  for (int i = 0; i < MAX_TUNNEL_ATTEMPTS; i++) {
    // Connect the socket.
    connectSocket(connectTimeout, readTimeout, call, eventListener);
    // Send the CONNECT request to the other end of the tunnel;
    // 200 means the tunnel is established, 407 means proxy auth is required.
    tunnelRequest = createTunnel(readTimeout, writeTimeout, tunnelRequest, url);
    // A null tunnelRequest means the tunnel was established.
    if (tunnelRequest == null) break; // Tunnel successfully created.

    // The proxy decided to close the connection after an auth challenge. We need to create a new
    // connection, but this time with the auth credentials.
    closeQuietly(rawSocket);
    rawSocket = null;
    sink = null;
    source = null;
    eventListener.connectEnd(call, route.socketAddress(), route.proxy(), null);
  }
}

It mainly does three things:

  1. Creates the tunnel request
  2. Connects the socket
  3. Sends the CONNECT request to the other end of the tunnel

So whichever branch is taken, a socket connection is opened (no surprise there...).
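On the wire, the tunnel request built above amounts to a plaintext CONNECT exchanged with the proxy before TLS begins. A sketch of that request text (the host and port are illustrative, and OkHttp's exact header set may differ):

```java
// Sketch of the plaintext CONNECT request sent to a proxy to open a tunnel.
public class TunnelRequest {
    static String connectRequest(String host, int port) {
        return "CONNECT " + host + ":" + port + " HTTP/1.1\r\n"
                + "Host: " + host + ":" + port + "\r\n"
                + "Proxy-Connection: Keep-Alive\r\n"
                + "\r\n"; // blank line ends the request; no body follows
    }

    public static void main(String[] args) {
        String req = connectRequest("example.com", 443);
        if (!req.startsWith("CONNECT example.com:443 HTTP/1.1\r\n")) throw new AssertionError();
        System.out.print(req);
    }
}
```

A 200 response from the proxy means the tunnel is open and the TLS handshake can start on top of it.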

Next, the protocol-establishment method, establishProtocol:

private void establishProtocol(ConnectionSpecSelector connectionSpecSelector,
    int pingIntervalMillis, Call call, EventListener eventListener) throws IOException {
  // When there is no sslSocketFactory:
  if (route.address().sslSocketFactory() == null) {
    if (route.address().protocols().contains(Protocol.H2_PRIOR_KNOWLEDGE)) {
      // Cleartext HTTP/2.
      socket = rawSocket;
      protocol = Protocol.H2_PRIOR_KNOWLEDGE;
      startHttp2(pingIntervalMillis);
      return;
    }

    socket = rawSocket;
    protocol = Protocol.HTTP_1_1;
    return;
  }

  eventListener.secureConnectStart(call);
  // TLS.
  connectTls(connectionSpecSelector);
  eventListener.secureConnectEnd(call, handshake);

  if (protocol == Protocol.HTTP_2) {
    // Start HTTP/2.
    startHttp2(pingIntervalMillis);
  }
}

Essentially it does two things: connectTls and startHttp2.

First, connectTls:

private void connectTls(ConnectionSpecSelector connectionSpecSelector) throws IOException {
  Address address = route.address();
  SSLSocketFactory sslSocketFactory = address.sslSocketFactory();
  boolean success = false;
  SSLSocket sslSocket = null;
  try {
    // Create an SSLSocket on top of the raw socket.
    sslSocket = (SSLSocket) sslSocketFactory.createSocket(
        rawSocket, address.url().host(), address.url().port(), true /* autoClose */);

    // Configure the SSLSocket.
    ConnectionSpec connectionSpec = connectionSpecSelector.configureSecureSocket(sslSocket);
    if (connectionSpec.supportsTlsExtensions()) {
      // TLS extension parameters.
      Platform.get().configureTlsExtensions(
          sslSocket, address.url().host(), address.protocols());
    }

    // TLS handshake.
    sslSocket.startHandshake();
    // Get the SSLSession (contains the negotiated cipher suite, the TLS master secret, etc.).
    SSLSession sslSocketSession = sslSocket.getSession();
    Handshake unverifiedHandshake = Handshake.get(sslSocketSession);

    // Verify the certificate: is this host trusted?
    if (!address.hostnameVerifier().verify(address.url().host(), sslSocketSession)) {
      X509Certificate cert = (X509Certificate) unverifiedHandshake.peerCertificates().get(0);
      throw new SSLPeerUnverifiedException("Hostname " + address.url().host() + " not verified:"
          + "\n    certificate: " + CertificatePinner.pin(cert)
          + "\n    DN: " + cert.getSubjectDN().getName()
          + "\n    subjectAltNames: " + OkHostnameVerifier.allSubjectAltNames(cert));
    }

    // Check that the certificate pinner is satisfied by the certificates presented.
    address.certificatePinner().check(address.url().host(),
        unverifiedHandshake.peerCertificates());

    // Success! Save the handshake and the ALPN protocol.
    // The protocol the server selected, if any.
    String maybeProtocol = connectionSpec.supportsTlsExtensions()
        ? Platform.get().getSelectedProtocol(sslSocket)
        : null;
    socket = sslSocket;
    source = Okio.buffer(Okio.source(socket));
    sink = Okio.buffer(Okio.sink(socket));
    handshake = unverifiedHandshake;
    protocol = maybeProtocol != null
        ? Protocol.get(maybeProtocol)
        : Protocol.HTTP_1_1;
    success = true;
  } catch (AssertionError e) {
    if (Util.isAndroidGetsocknameError(e)) throw new IOException(e);
    throw e;
  } finally {
    if (sslSocket != null) {
      Platform.get().afterHandshake(sslSocket);
    }
    if (!success) {
      closeQuietly(sslSocket);
    }
  }
}

It does the following:

  1. Creates an SSLSocket on top of the raw socket
  2. Configures the SSLSocket and the TLS extension parameters
  3. Performs the TLS handshake
  4. Retrieves what the peer sent back: the negotiated cipher suite, the TLS master secret, and so on
  5. Verifies the peer's certificate

Closing words

That wraps up this OkHttp source-code walkthrough. The overall design of the library is fairly simple, but every feature fans out into many technical details, and those details are best studied in the source when you actually need them. For example, a colleague recently asked: if OkHttp's DNS resolution returns multiple IPs, how does it choose among them? That is exactly one of these details. The relevant code is easy to find in the RouteSelector class, and it shows that OkHttp tries to avoid IPs that have failed to connect before:

while (hasNextProxy()) {
  // Postponed routes are always tried last. For example, if we have 2 proxies and all the
  // routes for proxy1 should be postponed, we'll move to proxy2. Only after we've exhausted
  // all the good routes will we attempt the postponed routes.
  Proxy proxy = nextProxy();
  for (int i = 0, size = inetSocketAddresses.size(); i < size; i++) {
    Route route = new Route(address, proxy, inetSocketAddresses.get(i));
    if (routeDatabase.shouldPostpone(route)) {
      postponedRoutes.add(route);
    } else {
      routes.add(route);
    }
  }
  if (!routes.isEmpty()) {
    break;
  }
}
if (routes.isEmpty()) {
  // We've exhausted all Proxies so fallback to the postponed routes.
  routes.addAll(postponedRoutes);
  postponedRoutes.clear();
}