Optimizing AWS Lambda cost and performance

Optimizing the cost of AWS Lambda is not trivial: the CPU power allocated to a function increases as its memory allocation grows.

AWS Lambda charges for allocated memory and execution time, so if we can reduce the execution time by increasing the memory, it may be possible to save money and improve performance at the same time.
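
To make that trade-off concrete, here is a minimal sketch of the billing arithmetic in Go. The per-GB-second rate is illustrative (check current AWS pricing for your region) and the per-request surcharge is ignored:

package main

import "fmt"

// Illustrative rate; check current AWS pricing for your region.
const pricePerGBSecond = 0.0000166667

// costPerMillion estimates the compute cost in dollars of one million
// invocations, given the allocated memory (MB) and the billed duration (ms).
func costPerMillion(memoryMB, billedMs float64) float64 {
	gbSeconds := (memoryMB / 1024.0) * (billedMs / 1000.0)
	return gbSeconds * pricePerGBSecond * 1000000
}

func main() {
	// 512 MB billed at 2800 ms costs the same as 1024 MB billed at
	// 1400 ms, but the latter responds twice as fast.
	fmt.Printf("%.2f$\n", costPerMillion(512, 2800))  // 23.33$
	fmt.Printf("%.2f$\n", costPerMillion(1024, 1400)) // 23.33$
}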

One way to get a rough picture of how a function is behaving is to analyze its CloudWatch Logs: we can download the logs and look at how the durations are distributed for the allocated memory.

Each Lambda report log line looks like this:

REPORT RequestId: 0aa3270b-f7a7-506b-8043-71cc984475d2	Duration: 1323.20 ms	Billed Duration: 1400 ms	Memory Size: 1280 MB	Max Memory Used: 169 MB	Init Duration: 173.48 ms

As we can see, this log line tells us how much time was billed and how much memory is allocated to the function.
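
For illustration, a minimal Go sketch of how those two fields could be extracted from a REPORT line might look like this (a simplified example, not the actual parser used by the tool below):

package main

import (
	"fmt"
	"regexp"
	"strconv"
)

var reportRe = regexp.MustCompile(`Billed Duration: (\d+) ms.*Memory Size: (\d+) MB`)

// parseReport extracts the billed duration (ms) and the allocated
// memory (MB) from a Lambda REPORT log line.
func parseReport(line string) (billedMs, memoryMB int, err error) {
	m := reportRe.FindStringSubmatch(line)
	if m == nil {
		return 0, 0, fmt.Errorf("not a REPORT line: %q", line)
	}
	billedMs, _ = strconv.Atoi(m[1])
	memoryMB, _ = strconv.Atoi(m[2])
	return billedMs, memoryMB, nil
}

func main() {
	line := "REPORT RequestId: 0aa3270b-f7a7-506b-8043-71cc984475d2\tDuration: 1323.20 ms\tBilled Duration: 1400 ms\tMemory Size: 1280 MB\tMax Memory Used: 169 MB"
	billed, mem, _ := parseReport(line)
	fmt.Println(billed, mem) // 1400 1280
}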

Fortunately, we don't need to implement all of this ourselves; there is already a tool for it. To install it, run:

$ go get -u -v github.com/dcu/optimize-lambda-cost

Then run it against your Lambda function:

$ optimize-lambda-cost analyze -p profile-with-access-to-watchlogs my-function --since="4 hours ago"

That command should output something like this:

>> Analyzing stats for memory bucket: 512 MB (total requests: 2109 )
> Top requests per billed duration
1700 ms: 29 (1.38%)
1800 ms: 28 (1.33%)
1900 ms: 73 (3.46%)
2000 ms: 125 (5.93%)
2100 ms: 145 (6.88%)
2200 ms: 139 (6.59%)
2300 ms: 97 (4.60%)
2400 ms: 134 (6.35%)
2500 ms: 103 (4.88%)
2600 ms: 92 (4.36%)
2700 ms: 71 (3.37%)
2800 ms: 73 (3.46%)
2900 ms: 90 (4.27%)
3000 ms: 72 (3.41%)
3100 ms: 57 (2.70%)
3200 ms: 44 (2.09%)
3300 ms: 37 (1.75%)
3400 ms: 25 (1.19%)
3500 ms: 27 (1.28%)
3600 ms: 37 (1.75%)
3700 ms: 19 (0.90%)
3800 ms: 22 (1.04%)
4000 ms: 23 (1.09%)
4100 ms: 19 (0.90%)
4200 ms: 17 (0.81%)
4300 ms: 31 (1.47%)
4400 ms: 22 (1.04%)
4500 ms: 18 (0.85%)
4800 ms: 17 (0.81%)
4900 ms: 15 (0.71%)
5200 ms: 21 (1.00%)
Estimated cost per million requests: 56.01$

> Distribution for durations
1th percentile 1684.43 ms billed: 1700 ms
25th percentile 2225.38 ms billed: 2300 ms
50th percentile 2769.69 ms billed: 2800 ms
75th percentile 4095.46 ms billed: 4100 ms
99th percentile 18443.18 ms billed: 18500 ms

> Distribution for used memory
1th percentile: 215.0 MB
25th percentile: 257.0 MB
50th percentile: 268.0 MB
75th percentile: 276.0 MB
99th percentile: 285.0 MB

> Suggested memory based on your usage
Suggestion for 1th percentile: 1344 MB
Suggestion for 25th percentile: 1728 MB
Suggestion for 50th percentile: 2048 MB
Suggestion for 75th percentile: 2880 MB
Suggestion for 99th percentile: 3008 MB
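
As a side note, the percentile tables in the report are easy to reproduce once the billed durations have been collected. A minimal sketch (not the tool's actual implementation) could be:

package main

import (
	"fmt"
	"sort"
)

// percentile returns the value at percentile p (0-100) of a sorted
// slice, using a simple floor-rank method.
func percentile(sorted []float64, p float64) float64 {
	if len(sorted) == 0 {
		return 0
	}
	rank := int(p / 100 * float64(len(sorted)-1))
	return sorted[rank]
}

func main() {
	// Hypothetical billed durations (ms) collected from REPORT lines.
	durations := []float64{1400, 1700, 2300, 2800, 4100, 18500}
	sort.Float64s(durations)
	for _, p := range []float64{1, 25, 50, 75, 99} {
		fmt.Printf("%gth percentile: %.0f ms\n", p, percentile(durations, p))
	}
}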

Now, doubling the memory, we can see that the estimated cost stays in the same range (it even drops slightly, from $56.01 to $51.95) while the times improve significantly:

>> Analyzing stats for memory bucket: 1024 MB (total requests: 574 )
> Top requests per billed duration
1000 ms: 16 (2.79%)
1100 ms: 63 (10.98%)
1200 ms: 97 (16.90%)
1300 ms: 76 (13.24%)
1400 ms: 55 (9.58%)
1500 ms: 60 (10.45%)
1600 ms: 54 (9.41%)
1700 ms: 35 (6.10%)
1800 ms: 15 (2.61%)
1900 ms: 16 (2.79%)
2200 ms: 10 (1.74%)
Estimated cost per million requests: 51.95$

> Distribution for durations
1th percentile 932.75 ms billed: 1000 ms
25th percentile 1162.63 ms billed: 1200 ms
50th percentile 1339.48 ms billed: 1400 ms
75th percentile 1611.81 ms billed: 1700 ms
99th percentile 7272.52 ms billed: 7300 ms

> Distribution for used memory
1th percentile: 177.0 MB
25th percentile: 244.0 MB
50th percentile: 249.0 MB
75th percentile: 259.0 MB
99th percentile: 264.0 MB

> Suggested memory based on your usage
Suggestion for 1th percentile: 1152 MB
Suggestion for 25th percentile: 1280 MB
Suggestion for 50th percentile: 1408 MB
Suggestion for 75th percentile: 1600 MB
Suggestion for 99th percentile: 3008 MB

Now let's try 4X the original memory:

>> Analyzing stats for memory bucket: 2048 MB (total requests: 1422 )
> Top requests per billed duration
700 ms: 53 (3.73%)
800 ms: 242 (17.02%)
900 ms: 303 (21.31%)
1000 ms: 260 (18.28%)
1100 ms: 224 (15.75%)
1200 ms: 138 (9.70%)
1300 ms: 66 (4.64%)
1400 ms: 33 (2.32%)
1500 ms: 31 (2.18%)
Estimated cost per million requests: 68.05$

> Distribution for durations
1th percentile 662.37 ms billed: 700 ms
25th percentile 823.33 ms billed: 900 ms
50th percentile 943.6 ms billed: 1000 ms
75th percentile 1087.41 ms billed: 1100 ms
99th percentile 2006.12 ms billed: 2100 ms

> Distribution for used memory
1th percentile: 203.0 MB
25th percentile: 255.0 MB
50th percentile: 271.0 MB
75th percentile: 284.0 MB
99th percentile: 378.0 MB

> Suggested memory based on your usage
Suggestion for 1th percentile: 1472 MB
Suggestion for 25th percentile: 1600 MB
Suggestion for 50th percentile: 1664 MB
Suggestion for 75th percentile: 1728 MB
Suggestion for 99th percentile: 2368 MB

Quadrupling the original memory, the cost only goes up by about 21% (from $56.01 to $68.05 per million requests), while the 99th-percentile response time improves roughly 9X (from 18443 ms to 2006 ms). Finally, with an intermediate amount of memory we get:

>> Analyzing stats for memory bucket: 1280 MB (total requests: 1660 )
> Top requests per billed duration
900 ms: 154 (9.28%)
1000 ms: 370 (22.29%)
1100 ms: 285 (17.17%)
1200 ms: 253 (15.24%)
1300 ms: 180 (10.84%)
1400 ms: 132 (7.95%)
1500 ms: 101 (6.08%)
1600 ms: 57 (3.43%)
Estimated cost per million requests: 48.72$

> Distribution for durations
1th percentile 805.22 ms billed: 900 ms
25th percentile 965.46 ms billed: 1000 ms
50th percentile 1111.78 ms billed: 1200 ms
75th percentile 1300.06 ms billed: 1400 ms
99th percentile 2551.16 ms billed: 2600 ms

> Distribution for used memory
1th percentile: 209.0 MB
25th percentile: 266.0 MB
50th percentile: 274.0 MB
75th percentile: 277.0 MB
99th percentile: 292.0 MB

> Suggested memory based on your usage
Suggestion for 1th percentile: 1216 MB
Suggestion for 25th percentile: 1280 MB
Suggestion for 50th percentile: 1408 MB
Suggestion for 75th percentile: 1536 MB
Suggestion for 99th percentile: 2304 MB

Here the cost is a bit lower still, and the response times are far better than with the original configuration.

It's worth mentioning that this approach doesn't work in every case: for example, if the function's duration is dominated by a request over the Internet, there isn't much that more memory can optimize.
