vllm.compilation.caching ¶
VllmSerializableFunction ¶
Bases: SerializableCallable
A wrapper around a function compiled by vLLM. It forwards tensor inputs to the compiled function and returns the result. It also implements a serialization interface to support PyTorch precompile with a custom backend, so that the compiled function can be saved to and loaded from disk. Wrapping the compiled function is unnecessary in cases where it does not need to be serialized. Currently, serialization for the custom backend is done by serializing the Dynamo fx graph plus example inputs.
Source code in vllm/compilation/caching.py
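The snippet below is a minimal, illustrative analogue of this wrapper, not vLLM's actual implementation. It only shows the call-forwarding behaviour and why the Dynamo fx graph and example inputs are kept around (they are what gets serialized). The class name `_SketchSerializableFunction` is hypothetical.

```python
from typing import Sequence

import torch


class _SketchSerializableFunction:
    """Toy analogue of VllmSerializableFunction (illustrative only)."""

    def __init__(
        self,
        graph_module: torch.fx.GraphModule,
        example_inputs: Sequence[torch.Tensor],
    ) -> None:
        # The real wrapper retains the Dynamo fx graph plus example inputs,
        # since that is what is serialized to disk.
        self.graph_module = graph_module
        self.example_inputs = list(example_inputs)

    def __call__(self, *args: torch.Tensor) -> torch.Tensor:
        # Forward the tensor inputs straight to the compiled function
        # and return its result.
        return self.graph_module(*args)
```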
__call__ ¶
__init__ ¶
Source code in vllm/compilation/caching.py
deserialize_compile_artifacts classmethod ¶
deserialize_compile_artifacts(
data: bytes,
) -> VllmSerializableFunction
Source code in vllm/compilation/caching.py
serialize_compile_artifacts classmethod ¶
serialize_compile_artifacts(
compiled_fn: VllmSerializableFunction,
) -> bytes
Source code in vllm/compilation/caching.py
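A hedged sketch of a save/load round trip using the two classmethods above. Only the classmethod names and signatures come from this page; the helper functions and file handling are assumptions.

```python
from pathlib import Path

from vllm.compilation.caching import VllmSerializableFunction


def save_compiled(compiled_fn: VllmSerializableFunction, path: Path) -> None:
    # Turn the wrapper into bytes and persist them on disk.
    path.write_bytes(
        VllmSerializableFunction.serialize_compile_artifacts(compiled_fn)
    )


def load_compiled(path: Path) -> VllmSerializableFunction:
    # Rebuild the wrapper from the serialized Dynamo fx graph + example inputs.
    return VllmSerializableFunction.deserialize_compile_artifacts(
        path.read_bytes()
    )
```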
_compute_code_hash ¶
Source code in vllm/compilation/caching.py
_compute_code_hash_with_content ¶
Source code in vllm/compilation/caching.py
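The two private helpers above are not documented here. The snippet below is only a generic illustration of content-based code hashing for cache invalidation; it does not claim to match their actual signatures or behaviour, and the function name and parameters are hypothetical.

```python
import hashlib
from pathlib import Path


def sketch_code_hash_with_content(files: list[Path]) -> str:
    # Hash the path and bytes of each source file so that any edit to the
    # code changes the digest and invalidates the corresponding cache entry.
    hasher = hashlib.sha256()
    for file in sorted(files):
        hasher.update(str(file).encode())
        hasher.update(file.read_bytes())
    return hasher.hexdigest()
```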
compilation_config_hash_factors ¶
compilation_config_hash_factors(
vllm_config: VllmConfig,
) -> list[str]
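A hedged usage sketch: folding the returned factors into a single cache-key digest. Only the call signature above comes from this page; the sha256 folding and the helper name are assumptions.

```python
import hashlib

from vllm.compilation.caching import compilation_config_hash_factors
from vllm.config import VllmConfig


def sketch_cache_key(vllm_config: VllmConfig) -> str:
    # Each factor is a string derived from the compilation-relevant parts
    # of the config; joining and hashing them yields a stable cache key.
    factors = compilation_config_hash_factors(vllm_config)
    return hashlib.sha256("|".join(factors).encode()).hexdigest()
```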