feat: Improve performance of atob and btoa #1503
Conversation
I've found how to test this -- on the issue linked from #1494 there's an example. I made some modifications -- below is a minimal testable example: big_base64_data.txt Put those in the same directory. polyfills.js is the output of core-js-builder from the script: and then I changed the headers of the functions in the bundle like this:
--- polyfills.js 2026-01-25 21:49:44
+++ /Users/johnzhou/Downloads/decode/polyfills.js 2026-01-25 21:42:05
@@ -147,11 +147,11 @@
// `atob` method
// https://html.spec.whatwg.org/multipage/webappapis.html#dom-atob
-$({ global: true, bind: true, enumerable: true, forced: FORCED }, {
- atob: function atob(data) {
+$({ global: true, bind: true, enumerable: true, forced: true }, {
+ atob1: function atob(data) {
validateArgumentsLength(arguments.length, 1);
// `webpack` dev server bug on IE global methods - use call(fn, global, ...)
- if (BASIC && !NO_SPACES_IGNORE && !NO_ENCODING_CHECK) return call($atob, globalThis, data);
+ // if (BASIC && !NO_SPACES_IGNORE && !NO_ENCODING_CHECK) return call($atob, globalThis, data);
var string = replace(toString(data), whitespaces, '');
var output = new $Array(Math.ceil(string.length * 3 / 4));
var outputIndex = 0;
@@ -1763,11 +1763,11 @@
// `btoa` method
// https://html.spec.whatwg.org/multipage/webappapis.html#dom-btoa
-$({ global: true, bind: true, enumerable: true, forced: !BASIC || NO_ARG_RECEIVING_CHECK || WRONG_ARG_CONVERSION || WRONG_ARITY }, {
- btoa: function btoa(data) {
+$({ global: true, bind: true, enumerable: true, forced: true }, {
+ btoa1: function btoa(data) {
validateArgumentsLength(arguments.length, 1);
// `webpack` dev server bug on IE global methods - use call(fn, global, ...)
- if (BASIC) return call($btoa, globalThis, toString(data));
+ // if (BASIC) return call($btoa, globalThis, toString(data));
var string = toString(data);
var output = new $Array(Math.ceil(string.length * 4 / 3));
var outputIndex = 0;
Note that the if (BASIC... lines can be uncommented to compare results with the built-in canonical functions. In decode.html, paste in the contents of big_base64_data.txt. To compare the polyfills, I switched the bundle back to the old implementation and then to the new version posted here. The result is 28.88 ms down to 14.26 ms for encoding over 20 runs, and 83.29 ms down to 56.27 ms over 20 runs (with the run counts specified in the text fields below the input at the top).
@zloirock This should be ready to take a look at now.
zloirock
left a comment
- `Math.ceil` should be cached locally like `fromCharCode`.
- I'm not sure about this logic determination of the array length - if I understand correctly, in some cases it will be more than the real number of elements and the result array will have holes that could cause deoptimization for `join`.
- `Array#join` should be used as the rest prototype methods - with `uncurryThis`.
- There are many other methods that could have similar problems - for example, `Uint8Array` base64 / hex methods - you could also improve them. However, sure, this is not a requirement for the adoption of the PR.
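The caching pattern the review asks for can be sketched like this. The standalone `uncurryThis` below is a simplified stand-in of my own for core-js's internal helper, and the commented usage lines are illustrative, not the PR's actual code:

```javascript
// Simplified stand-in for core-js's internal uncurryThis helper (assumption:
// the real one lives in core-js internals; this version is equivalent in spirit).
var uncurryThis = function (fn) {
  return Function.prototype.call.bind(fn);
};

// Cache globals and prototype methods once at module scope, as requested:
var ceil = Math.ceil;                  // cached locally, like fromCharCode
var fromCharCode = String.fromCharCode;
var join = uncurryThis([].join);       // Array#join via uncurryThis

// Later, inside atob/btoa, only the cached references are used, e.g.:
// var output = new Array(ceil(string.length * 3 / 4));
// ...
// return join(output, '');
```

Caching avoids repeated property lookups on `Math`/`Array.prototype` and, more importantly for core-js, protects the polyfill from user code that patches those globals after the bundle loads.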
This is true for the atob decoding -- good catch. For btoa we actually under-allocate, which means the assignment might reallocate and cause bad performance. Would it be good to add comments inline explaining the math?
@zloirock Tested on Windows 11 with IE (using a .hta file with a link in it as a hack to launch IE) and also on macOS with the latest Chrome -- this PR now seems to offer some significant speedups. However, my setups are slow, so I'd appreciate you also testing this. Ready for another review.
This reverts commit 9be5d63.
Thanks.
Glad to help micro-improve the performance of some polyfills!
Next up for performance improvements is #1510, covering the `Uint8Array` hex functions.
Based on #1464's work by @Chanran. [EDIT -- I got the PR # wrong]
Draft for now as I test performance.