Module core::arch::wasm32 (stable since Rust 1.33.0)

This is supported on WebAssembly only.

Platform-specific intrinsics for the wasm32 platform.

This module provides intrinsics specific to the WebAssembly architecture. Here you’ll find intrinsics specific to WebAssembly that aren’t otherwise surfaced somewhere in a cross-platform abstraction of std, and you’ll also find functions for leveraging WebAssembly proposals such as atomics and simd.

Intrinsics in the wasm32 module are modeled after the WebAssembly instructions that they represent. Most functions are named after the instruction they intend to correspond to, and the arguments/results correspond to the type signature of the instruction itself. Stable WebAssembly instructions are documented online.
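
For example, the stable unreachable intrinsic listed below corresponds one-to-one with wasm’s unreachable instruction. A minimal sketch follows (the abort_now wrapper is illustrative and assumes the intrinsic’s signature fn unreachable() -> !):

#[cfg(target_arch = "wasm32")]
fn abort_now() -> ! {
    // Lowers to a single wasm `unreachable` instruction, which traps.
    core::arch::wasm32::unreachable()
}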

If a proposal is not yet stable in WebAssembly itself then the functions within this module may be unstable and require the nightly channel of Rust to use. As the proposal itself stabilizes, the intrinsics in this module should stabilize as well.

See the module documentation for general information about the arch module and platform intrinsics.

Atomics

The threads proposal for WebAssembly adds a number of instructions for dealing with multithreaded programs. Most instructions added in the atomics proposal are exposed in Rust through the std::sync::atomic module. Some instructions, however, don’t have direct equivalents in Rust so they’re exposed here instead.
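
As a hedged sketch of how those intrinsics compose (not an official recipe), the pair below wraps the experimental wait/notify intrinsics in a futex-style interface. It assumes the nightly-only signatures memory_atomic_wait32(ptr, expected, timeout_ns) -> i32 and memory_atomic_notify(ptr, waiters) -> u32, both of which may change as the proposal evolves:

#[cfg(all(target_arch = "wasm32", target_feature = "atomics"))]
mod futex {
    use core::arch::wasm32::{memory_atomic_notify, memory_atomic_wait32};

    /// Park the current thread while `*addr == expected` (a negative timeout
    /// means "wait forever"). Returns 0 if woken by a notify, 1 if the value
    /// already differed, or 2 if a timeout elapsed.
    pub unsafe fn wait(addr: *mut i32, expected: i32) -> i32 {
        memory_atomic_wait32(addr, expected, -1)
    }

    /// Wake up to `count` threads parked on `addr`, returning how many woke.
    pub unsafe fn wake(addr: *mut i32, count: u32) -> u32 {
        memory_atomic_notify(addr, count)
    }
}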

Note that the instructions added in the atomics proposal work both in contexts with a shared wasm memory and in contexts without one. These intrinsics are always available in the standard library, but you likely won’t be able to use them productively unless you recompile the standard library (and all your code) with -Ctarget-feature=+atomics.

It’s also worth pointing out that the story for multi-threaded WebAssembly in Rust is still in a somewhat “early days” phase as of this writing. The pieces mostly work, but they generally require a good deal of manual setup. At this time it’s not as simple as “just call std::thread::spawn”, but it will hopefully get there one day!

SIMD

The simd proposal for WebAssembly added a new v128 type for a 128-bit SIMD register. It also added a large array of instructions to operate on the v128 type to perform data processing. Using SIMD on wasm is intended to be similar to using it on x86_64, for example. You’d write a function such as:

#[cfg(target_arch = "wasm32")]
#[target_feature(enable = "simd128")]
unsafe fn uses_simd() {
    use std::arch::wasm32::*;
    // ...
}

Unlike x86_64, however, WebAssembly does not currently have dynamic detection at runtime of whether SIMD is supported (this is one of the motivations for the conditional sections and feature detection proposals, but those are still in their early days). This means that your binary will either use SIMD and only run on engines which support SIMD, or it will not use SIMD at all. For compatibility, the standard library itself does not use any SIMD internally. Determining how best to ship your WebAssembly binary with SIMD is largely left up to you, as it can be pretty nuanced depending on your situation.

To enable SIMD support at compile time you need to do one of two things:

  • First you can annotate functions with #[target_feature(enable = "simd128")]. This causes just that one function to have SIMD support available to it, and intrinsics will get inlined as usual in this situation.

  • Second you can compile your program with -Ctarget-feature=+simd128. This compilation flag blanket enables SIMD support for your entire compilation. Note that this does not include the standard library unless you recompile the standard library.

If you enable SIMD via either of these routes then you’ll have a WebAssembly binary that uses SIMD instructions, and you’ll need to ship that accordingly. Also note that if you call SIMD intrinsics but don’t enable SIMD via either of these mechanisms, you’ll still have SIMD generated in your program. This means to generate a binary without SIMD you’ll need to avoid both options above plus calling into any intrinsics in this module.
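
As a concrete illustration of the listings below, here is a hedged sketch (the add_slices name and the remainder handling are illustrative choices, not a canonical recipe) that sums two f32 slices four lanes at a time using the v128_load, f32x4_add, and v128_store intrinsics listed in this module:

#[cfg(target_arch = "wasm32")]
#[target_feature(enable = "simd128")]
unsafe fn add_slices(a: &[f32], b: &[f32], out: &mut [f32]) {
    use core::arch::wasm32::*;

    assert!(a.len() == b.len() && a.len() == out.len());
    let chunks = a.len() / 4;
    for i in 0..chunks {
        // wasm's v128.load/v128.store tolerate unaligned addresses, and the
        // intrinsics are assumed to lower to them, so pointers derived from
        // `*const f32` data should be fine here.
        let va = v128_load(a.as_ptr().add(i * 4) as *const v128);
        let vb = v128_load(b.as_ptr().add(i * 4) as *const v128);
        v128_store(out.as_mut_ptr().add(i * 4) as *mut v128, f32x4_add(va, vb));
    }
    // Scalar tail for the last `len % 4` elements.
    for i in chunks * 4..a.len() {
        out[i] = a[i] + b[i];
    }
}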

Structs

v128

WASM-specific 128-bit wide SIMD vector type.

Functions

memory_atomic_notify (Experimental, atomics)

Corresponding intrinsic to wasm’s memory.atomic.notify instruction

memory_atomic_wait32 (Experimental, atomics)

Corresponding intrinsic to wasm’s memory.atomic.wait32 instruction

memory_atomic_wait64 (Experimental, atomics)

Corresponding intrinsic to wasm’s memory.atomic.wait64 instruction

f32x4simd128

Materializes a SIMD value from the provided operands.

f32x4_abssimd128

Calculates the absolute value of each lane of a 128-bit vector interpreted as four 32-bit floating point numbers.

f32x4_addsimd128

Adds pairwise lanes of two 128-bit vectors interpreted as four 32-bit floating point numbers.

f32x4_ceilsimd128

Lane-wise rounding to the nearest integral value not smaller than the input.

f32x4_convert_i32x4simd128

Converts a 128-bit vector interpreted as four 32-bit signed integers into a 128-bit vector of four 32-bit floating point numbers.

f32x4_convert_u32x4simd128

Converts a 128-bit vector interpreted as four 32-bit unsigned integers into a 128-bit vector of four 32-bit floating point numbers.

f32x4_demote_f64x2_zerosimd128

Conversion of the two double-precision floating point lanes to two lower single-precision lanes of the result. The two higher lanes of the result are initialized to zero. If the conversion result is not representable as a single-precision floating point number, it is rounded to the nearest-even representable number.

f32x4_divsimd128

Divides pairwise lanes of two 128-bit vectors interpreted as four 32-bit floating point numbers.

f32x4_eqsimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.

f32x4_extract_lanesimd128

Extracts a lane from a 128-bit vector interpreted as 4 packed f32 numbers.

f32x4_floorsimd128

Lane-wise rounding to the nearest integral value not greater than the input.

f32x4_gesimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.

f32x4_gtsimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.

f32x4_lesimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.

f32x4_ltsimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.

f32x4_maxsimd128

Calculates the maximum of pairwise lanes of two 128-bit vectors interpreted as four 32-bit floating point numbers.

f32x4_minsimd128

Calculates the minimum of pairwise lanes of two 128-bit vectors interpreted as four 32-bit floating point numbers.

f32x4_mulsimd128

Multiplies pairwise lanes of two 128-bit vectors interpreted as four 32-bit floating point numbers.

f32x4_nesimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.

f32x4_nearestsimd128

Lane-wise rounding to the nearest integral value; if two values are equally near, rounds to the even one.

f32x4_negsimd128

Negates each lane of a 128-bit vector interpreted as four 32-bit floating point numbers.

f32x4_pmaxsimd128

Lane-wise maximum value, defined as a < b ? b : a

f32x4_pminsimd128

Lane-wise minimum value, defined as b < a ? b : a

f32x4_replace_lanesimd128

Replaces a lane from a 128-bit vector interpreted as 4 packed f32 numbers.

f32x4_splatsimd128

Creates a vector with identical lanes.

f32x4_sqrtsimd128

Calculates the square root of each lane of a 128-bit vector interpreted as four 32-bit floating point numbers.

f32x4_subsimd128

Subtracts pairwise lanes of two 128-bit vectors interpreted as four 32-bit floating point numbers.

f32x4_truncsimd128

Lane-wise rounding to the nearest integral value with the magnitude not larger than the input.

f64x2simd128

Materializes a SIMD value from the provided operands.

f64x2_abssimd128

Calculates the absolute value of each lane of a 128-bit vector interpreted as two 64-bit floating point numbers.

f64x2_addsimd128

Adds pairwise lanes of two 128-bit vectors interpreted as two 64-bit floating point numbers.

f64x2_ceilsimd128

Lane-wise rounding to the nearest integral value not smaller than the input.

f64x2_convert_low_i32x4simd128

Lane-wise conversion from integer to floating point.

f64x2_convert_low_u32x4simd128

Lane-wise conversion from integer to floating point.

f64x2_divsimd128

Divides pairwise lanes of two 128-bit vectors interpreted as two 64-bit floating point numbers.

f64x2_eqsimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.

f64x2_extract_lanesimd128

Extracts a lane from a 128-bit vector interpreted as 2 packed f64 numbers.

f64x2_floorsimd128

Lane-wise rounding to the nearest integral value not greater than the input.

f64x2_gesimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.

f64x2_gtsimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.

f64x2_lesimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.

f64x2_ltsimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.

f64x2_maxsimd128

Calculates the maximum of pairwise lanes of two 128-bit vectors interpreted as two 64-bit floating point numbers.

f64x2_minsimd128

Calculates the minimum of pairwise lanes of two 128-bit vectors interpreted as two 64-bit floating point numbers.

f64x2_mulsimd128

Multiplies pairwise lanes of two 128-bit vectors interpreted as two 64-bit floating point numbers.

f64x2_nesimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.

f64x2_nearestsimd128

Lane-wise rounding to the nearest integral value; if two values are equally near, rounds to the even one.

f64x2_negsimd128

Negates each lane of a 128-bit vector interpreted as two 64-bit floating point numbers.

f64x2_pmaxsimd128

Lane-wise maximum value, defined as a < b ? b : a

f64x2_pminsimd128

Lane-wise minimum value, defined as b < a ? b : a

f64x2_promote_low_f32x4simd128

Conversion of the two lower single-precision floating point lanes to the two double-precision lanes of the result.

f64x2_replace_lanesimd128

Replaces a lane from a 128-bit vector interpreted as 2 packed f64 numbers.

f64x2_splatsimd128

Creates a vector with identical lanes.

f64x2_sqrtsimd128

Calculates the square root of each lane of a 128-bit vector interpreted as two 64-bit floating point numbers.

f64x2_subsimd128

Subtracts pairwise lanes of two 128-bit vectors interpreted as two 64-bit floating point numbers.

f64x2_truncsimd128

Lane-wise rounding to the nearest integral value with the magnitude not larger than the input.

i8x16simd128

Materializes a SIMD value from the provided operands.

i8x16_abssimd128

Lane-wise wrapping absolute value.

i8x16_addsimd128

Adds two 128-bit vectors as if they were two packed sixteen 8-bit integers.

i8x16_add_satsimd128

Adds two 128-bit vectors as if they were two packed sixteen 8-bit signed integers, saturating on overflow to i8::MAX.

i8x16_all_truesimd128

Returns true if all lanes are nonzero, or false if any lane is zero.

i8x16_bitmasksimd128

Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.

i8x16_eqsimd128

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.

i8x16_extract_lanesimd128

Extracts a lane from a 128-bit vector interpreted as 16 packed i8 numbers.

i8x16_gesimd128

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.

i8x16_gtsimd128

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.

i8x16_lesimd128

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.

i8x16_ltsimd128

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.

i8x16_maxsimd128

Compares lane-wise signed integers, and returns the maximum of each pair.

i8x16_minsimd128

Compares lane-wise signed integers, and returns the minimum of each pair.

i8x16_narrow_i16x8simd128

Converts two input vectors into a smaller lane vector by narrowing each lane.

i8x16_nesimd128

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.

i8x16_negsimd128

Negates a 128-bit vector interpreted as sixteen 8-bit signed integers.

i8x16_popcntsimd128

Count the number of bits set to one within each lane.

i8x16_replace_lanesimd128

Replaces a lane from a 128-bit vector interpreted as 16 packed i8 numbers.

i8x16_shlsimd128

Shifts each lane to the left by the specified number of bits.

i8x16_shrsimd128

Shifts each lane to the right by the specified number of bits, sign extending.

i8x16_shufflesimd128

Returns a new vector with lanes selected from the lanes of the two input vectors $a and $b specified in the 16 immediate operands.

i8x16_splatsimd128

Creates a vector with identical lanes.

i8x16_subsimd128

Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit integers.

i8x16_sub_satsimd128

Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit signed integers, saturating on overflow to i8::MIN.

i8x16_swizzlesimd128

Returns a new vector with lanes selected from the lanes of the first input vector a specified in the second input vector s.

i16x8simd128

Materializes a SIMD value from the provided operands.

i16x8_abssimd128

Lane-wise wrapping absolute value.

i16x8_addsimd128

Adds two 128-bit vectors as if they were two packed eight 16-bit integers.

i16x8_add_satsimd128

Adds two 128-bit vectors as if they were two packed eight 16-bit signed integers, saturating on overflow to i16::MAX.

i16x8_all_truesimd128

Returns 1 if all lanes are nonzero, or 0 if any lane is zero.

i16x8_bitmasksimd128

Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.

i16x8_eqsimd128

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.

i16x8_extadd_pairwise_i8x16simd128

Lane-wise integer extended pairwise addition producing extended results (twice wider results than the inputs).

i16x8_extadd_pairwise_u8x16simd128

Lane-wise integer extended pairwise addition producing extended results (twice wider results than the inputs).

i16x8_extend_high_i8x16simd128

Converts high half of the smaller lane vector to a larger lane vector, sign extended.

i16x8_extend_high_u8x16simd128

Converts high half of the smaller lane vector to a larger lane vector, zero extended.

i16x8_extend_low_i8x16simd128

Converts low half of the smaller lane vector to a larger lane vector, sign extended.

i16x8_extend_low_u8x16simd128

Converts low half of the smaller lane vector to a larger lane vector, zero extended.

i16x8_extmul_high_i8x16simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

i16x8_extmul_high_u8x16simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

i16x8_extmul_low_i8x16simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

i16x8_extmul_low_u8x16simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

i16x8_extract_lanesimd128

Extracts a lane from a 128-bit vector interpreted as 8 packed i16 numbers.

i16x8_gesimd128

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit signed integers.

i16x8_gtsimd128

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit signed integers.

i16x8_lesimd128

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit signed integers.

i16x8_load_extend_i8x8simd128

Load eight 8-bit integers and sign extend each one to a 16-bit lane

i16x8_load_extend_u8x8simd128

Load eight 8-bit integers and zero extend each one to a 16-bit lane

i16x8_ltsimd128

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit signed integers.

i16x8_maxsimd128

Compares lane-wise signed integers, and returns the maximum of each pair.

i16x8_minsimd128

Compares lane-wise signed integers, and returns the minimum of each pair.

i16x8_mulsimd128

Multiplies two 128-bit vectors as if they were two packed eight 16-bit signed integers.

i16x8_narrow_i32x4simd128

Converts two input vectors into a smaller lane vector by narrowing each lane.

i16x8_nesimd128

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.

i16x8_negsimd128

Negates a 128-bit vector interpreted as eight 16-bit signed integers.

i16x8_q15mulr_satsimd128

Lane-wise saturating rounding multiplication in Q15 format.

i16x8_replace_lanesimd128

Replaces a lane from a 128-bit vector interpreted as 8 packed i16 numbers.

i16x8_shlsimd128

Shifts each lane to the left by the specified number of bits.

i16x8_shrsimd128

Shifts each lane to the right by the specified number of bits, sign extending.

i16x8_shufflesimd128

Same as i8x16_shuffle, except operates as if the inputs were eight 16-bit integers, only taking 8 indices to shuffle.

i16x8_splatsimd128

Creates a vector with identical lanes.

i16x8_subsimd128

Subtracts two 128-bit vectors as if they were two packed eight 16-bit integers.

i16x8_sub_satsimd128

Subtracts two 128-bit vectors as if they were two packed eight 16-bit signed integers, saturating on overflow to i16::MIN.

i32x4simd128

Materializes a SIMD value from the provided operands.

i32x4_abssimd128

Lane-wise wrapping absolute value.

i32x4_addsimd128

Adds two 128-bit vectors as if they were two packed four 32-bit integers.

i32x4_all_truesimd128

Returns 1 if all lanes are nonzero, or 0 if any lane is zero.

i32x4_bitmasksimd128

Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.

i32x4_dot_i16x8simd128

Lane-wise multiply signed 16-bit integers in the two input vectors and add adjacent pairs of the full 32-bit results.

i32x4_eqsimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.

i32x4_extadd_pairwise_i16x8simd128

Lane-wise integer extended pairwise addition producing extended results (twice wider results than the inputs).

i32x4_extadd_pairwise_u16x8simd128

Lane-wise integer extended pairwise addition producing extended results (twice wider results than the inputs).

i32x4_extend_high_i16x8simd128

Converts high half of the smaller lane vector to a larger lane vector, sign extended.

i32x4_extend_high_u16x8simd128

Converts high half of the smaller lane vector to a larger lane vector, zero extended.

i32x4_extend_low_i16x8simd128

Converts low half of the smaller lane vector to a larger lane vector, sign extended.

i32x4_extend_low_u16x8simd128

Converts low half of the smaller lane vector to a larger lane vector, zero extended.

i32x4_extmul_high_i16x8simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

i32x4_extmul_high_u16x8simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

i32x4_extmul_low_i16x8simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

i32x4_extmul_low_u16x8simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

i32x4_extract_lanesimd128

Extracts a lane from a 128-bit vector interpreted as 4 packed i32 numbers.

i32x4_gesimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.

i32x4_gtsimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.

i32x4_lesimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.

i32x4_load_extend_i16x4simd128

Load four 16-bit integers and sign extend each one to a 32-bit lane

i32x4_load_extend_u16x4simd128

Load four 16-bit integers and zero extend each one to a 32-bit lane

i32x4_ltsimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.

i32x4_maxsimd128

Compares lane-wise signed integers, and returns the maximum of each pair.

i32x4_minsimd128

Compares lane-wise signed integers, and returns the minimum of each pair.

i32x4_mulsimd128

Multiplies two 128-bit vectors as if they were two packed four 32-bit signed integers.

i32x4_nesimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.

i32x4_negsimd128

Negates a 128-bit vector interpreted as four 32-bit signed integers.

i32x4_replace_lanesimd128

Replaces a lane from a 128-bit vector interpreted as 4 packed i32 numbers.

i32x4_shlsimd128

Shifts each lane to the left by the specified number of bits.

i32x4_shrsimd128

Shifts each lane to the right by the specified number of bits, sign extending.

i32x4_shufflesimd128

Same as i8x16_shuffle, except operates as if the inputs were four 32-bit integers, only taking 4 indices to shuffle.

i32x4_splatsimd128

Creates a vector with identical lanes.

i32x4_subsimd128

Subtracts two 128-bit vectors as if they were two packed four 32-bit integers.

i32x4_trunc_sat_f32x4simd128

Converts a 128-bit vector interpreted as four 32-bit floating point numbers into a 128-bit vector of four 32-bit signed integers.

i32x4_trunc_sat_f64x2_zerosimd128

Saturating conversion of the two double-precision floating point lanes to two lower integer lanes using the IEEE convertToIntegerTowardZero function.

i64x2simd128

Materializes a SIMD value from the provided operands.

i64x2_abssimd128

Lane-wise wrapping absolute value.

i64x2_addsimd128

Adds two 128-bit vectors as if they were two packed two 64-bit integers.

i64x2_all_truesimd128

Returns 1 if all lanes are nonzero, or 0 if any lane is zero.

i64x2_bitmasksimd128

Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.

i64x2_eqsimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.

i64x2_extend_high_i32x4simd128

Converts high half of the smaller lane vector to a larger lane vector, sign extended.

i64x2_extend_high_u32x4simd128

Converts high half of the smaller lane vector to a larger lane vector, zero extended.

i64x2_extend_low_i32x4simd128

Converts low half of the smaller lane vector to a larger lane vector, sign extended.

i64x2_extend_low_u32x4simd128

Converts low half of the smaller lane vector to a larger lane vector, zero extended.

i64x2_extmul_high_i32x4simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

i64x2_extmul_high_u32x4simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

i64x2_extmul_low_i32x4simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

i64x2_extmul_low_u32x4simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

i64x2_extract_lanesimd128

Extracts a lane from a 128-bit vector interpreted as 2 packed i64 numbers.

i64x2_gesimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit signed integers.

i64x2_gtsimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit signed integers.

i64x2_lesimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit signed integers.

i64x2_load_extend_i32x2simd128

Load two 32-bit integers and sign extend each one to a 64-bit lane

i64x2_load_extend_u32x2simd128

Load two 32-bit integers and zero extend each one to a 64-bit lane

i64x2_ltsimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit signed integers.

i64x2_mulsimd128

Multiplies two 128-bit vectors as if they were two packed two 64-bit integers.

i64x2_nesimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.

i64x2_negsimd128

Negates a 128-bit vector interpreted as two 64-bit signed integers.

i64x2_replace_lanesimd128

Replaces a lane from a 128-bit vector interpreted as 2 packed i64 numbers.

i64x2_shlsimd128

Shifts each lane to the left by the specified number of bits.

i64x2_shrsimd128

Shifts each lane to the right by the specified number of bits, sign extending.

i64x2_shufflesimd128

Same as i8x16_shuffle, except operates as if the inputs were two 64-bit integers, only taking 2 indices to shuffle.

i64x2_splatsimd128

Creates a vector with identical lanes.

i64x2_subsimd128

Subtracts two 128-bit vectors as if they were two packed two 64-bit integers.

memory_grow

Corresponding intrinsic to wasm’s memory.grow instruction

memory_size

Corresponding intrinsic to wasm’s memory.size instruction

u8x16simd128

Materializes a SIMD value from the provided operands.

u8x16_addsimd128

Adds two 128-bit vectors as if they were two packed sixteen 8-bit integers.

u8x16_add_satsimd128

Adds two 128-bit vectors as if they were two packed sixteen 8-bit unsigned integers, saturating on overflow to u8::MAX.

u8x16_all_truesimd128

Returns true if all lanes are nonzero, or false if any lane is zero.

u8x16_avgrsimd128

Lane-wise rounding average.

u8x16_bitmasksimd128

Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.

u8x16_eqsimd128

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.

u8x16_extract_lanesimd128

Extracts a lane from a 128-bit vector interpreted as 16 packed u8 numbers.

u8x16_gesimd128

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.

u8x16_gtsimd128

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.

u8x16_lesimd128

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.

u8x16_ltsimd128

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.

u8x16_maxsimd128

Compares lane-wise unsigned integers, and returns the maximum of each pair.

u8x16_minsimd128

Compares lane-wise unsigned integers, and returns the minimum of each pair.

u8x16_narrow_i16x8simd128

Converts two input vectors into a smaller lane vector by narrowing each lane.

u8x16_nesimd128

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.

u8x16_popcntsimd128

Count the number of bits set to one within each lane.

u8x16_replace_lanesimd128

Replaces a lane from a 128-bit vector interpreted as 16 packed u8 numbers.

u8x16_shlsimd128

Shifts each lane to the left by the specified number of bits.

u8x16_shrsimd128

Shifts each lane to the right by the specified number of bits, shifting in zeros.

u8x16_shufflesimd128

Returns a new vector with lanes selected from the lanes of the two input vectors $a and $b specified in the 16 immediate operands.

u8x16_splatsimd128

Creates a vector with identical lanes.

u8x16_subsimd128

Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit integers.

u8x16_sub_satsimd128

Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit unsigned integers, saturating on overflow to 0.

u8x16_swizzlesimd128

Returns a new vector with lanes selected from the lanes of the first input vector a specified in the second input vector s.

u16x8simd128

Materializes a SIMD value from the provided operands.

u16x8_addsimd128

Adds two 128-bit vectors as if they were two packed eight 16-bit integers.

u16x8_add_satsimd128

Adds two 128-bit vectors as if they were two packed eight 16-bit unsigned integers, saturating on overflow to u16::MAX.

u16x8_all_truesimd128

Returns 1 if all lanes are nonzero, or 0 if any lane is zero.

u16x8_avgrsimd128

Lane-wise rounding average.

u16x8_bitmasksimd128

Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.

u16x8_eqsimd128

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.

u16x8_extadd_pairwise_u8x16simd128

Lane-wise integer extended pairwise addition producing extended results (twice wider results than the inputs).

u16x8_extend_high_u8x16simd128

Converts high half of the smaller lane vector to a larger lane vector, zero extended.

u16x8_extend_low_u8x16simd128

Converts low half of the smaller lane vector to a larger lane vector, zero extended.

u16x8_extmul_high_u8x16simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

u16x8_extmul_low_u8x16simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

u16x8_extract_lanesimd128

Extracts a lane from a 128-bit vector interpreted as 8 packed u16 numbers.

u16x8_gesimd128

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.

u16x8_gtsimd128

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.

u16x8_lesimd128

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.

u16x8_load_extend_u8x8simd128

Load eight 8-bit integers and zero extend each one to a 16-bit lane

u16x8_ltsimd128

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.

u16x8_maxsimd128

Compares lane-wise unsigned integers, and returns the maximum of each pair.

u16x8_minsimd128

Compares lane-wise unsigned integers, and returns the minimum of each pair.

u16x8_mulsimd128

Multiplies two 128-bit vectors as if they were two packed eight 16-bit unsigned integers.

u16x8_narrow_i32x4simd128

Converts two input vectors into a smaller lane vector by narrowing each lane.

u16x8_nesimd128

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.

u16x8_replace_lanesimd128

Replaces a lane from a 128-bit vector interpreted as 8 packed u16 numbers.

u16x8_shlsimd128

Shifts each lane to the left by the specified number of bits.

u16x8_shrsimd128

Shifts each lane to the right by the specified number of bits, shifting in zeros.

u16x8_shufflesimd128

Same as i8x16_shuffle, except operates as if the inputs were eight 16-bit integers, only taking 8 indices to shuffle.

u16x8_splatsimd128

Creates a vector with identical lanes.

u16x8_subsimd128

Subtracts two 128-bit vectors as if they were two packed eight 16-bit integers.

u16x8_sub_satsimd128

Subtracts two 128-bit vectors as if they were two packed eight 16-bit unsigned integers, saturating on overflow to 0.

u32x4simd128

Materializes a SIMD value from the provided operands.

u32x4_addsimd128

Adds two 128-bit vectors as if they were two packed four 32-bit integers.

u32x4_all_truesimd128

Returns 1 if all lanes are nonzero, or 0 if any lane is zero.

u32x4_bitmasksimd128

Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.

u32x4_eqsimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.

u32x4_extadd_pairwise_u16x8simd128

Lane-wise integer extended pairwise addition producing extended results (twice wider results than the inputs).

u32x4_extend_high_u16x8simd128

Converts high half of the smaller lane vector to a larger lane vector, zero extended.

u32x4_extend_low_u16x8simd128

Converts low half of the smaller lane vector to a larger lane vector, zero extended.

u32x4_extmul_high_u16x8simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

u32x4_extmul_low_u16x8simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

u32x4_extract_lanesimd128

Extracts a lane from a 128-bit vector interpreted as 4 packed u32 numbers.

u32x4_gesimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.

u32x4_gtsimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.

u32x4_lesimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.

u32x4_load_extend_u16x4simd128

Load four 16-bit integers and zero extend each one to a 32-bit lane

u32x4_ltsimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.

u32x4_maxsimd128

Compares lane-wise unsigned integers, and returns the maximum of each pair.

u32x4_minsimd128

Compares lane-wise unsigned integers, and returns the minimum of each pair.

u32x4_mulsimd128

Multiplies two 128-bit vectors as if they were two packed four 32-bit unsigned integers.

u32x4_nesimd128

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.

u32x4_replace_lanesimd128

Replaces a lane from a 128-bit vector interpreted as 4 packed u32 numbers.

u32x4_shlsimd128

Shifts each lane to the left by the specified number of bits.

u32x4_shrsimd128

Shifts each lane to the right by the specified number of bits, shifting in zeros.

u32x4_shufflesimd128

Same as i8x16_shuffle, except operates as if the inputs were four 32-bit integers, only taking 4 indices to shuffle.

u32x4_splatsimd128

Creates a vector with identical lanes.

u32x4_subsimd128

Subtracts two 128-bit vectors as if they were two packed four 32-bit integers.

u32x4_trunc_sat_f32x4simd128

Converts a 128-bit vector interpreted as four 32-bit floating point numbers into a 128-bit vector of four 32-bit unsigned integers.

u32x4_trunc_sat_f64x2_zerosimd128

Saturating conversion of the two double-precision floating point lanes to two lower integer lanes using the IEEE convertToIntegerTowardZero function.

u64x2simd128

Materializes a SIMD value from the provided operands.

u64x2_addsimd128

Adds two 128-bit vectors as if they were two packed two 64-bit integers.

u64x2_all_truesimd128

Returns 1 if all lanes are nonzero, or 0 if any lane is zero.

u64x2_bitmasksimd128

Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.

u64x2_eqsimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.

u64x2_extend_high_u32x4simd128

Converts high half of the smaller lane vector to a larger lane vector, zero extended.

u64x2_extend_low_u32x4simd128

Converts low half of the smaller lane vector to a larger lane vector, zero extended.

u64x2_extmul_high_u32x4simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

u64x2_extmul_low_u32x4simd128

Lane-wise integer extended multiplication producing twice wider result than the inputs.

u64x2_extract_lanesimd128

Extracts a lane from a 128-bit vector interpreted as 2 packed u64 numbers.

u64x2_load_extend_u32x2simd128

Load two 32-bit integers and zero extend each one to a 64-bit lane

u64x2_mulsimd128

Multiplies two 128-bit vectors as if they were two packed two 64-bit integers.

u64x2_nesimd128

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.

u64x2_replace_lanesimd128

Replaces a lane from a 128-bit vector interpreted as 2 packed u64 numbers.

u64x2_shlsimd128

Shifts each lane to the left by the specified number of bits.

u64x2_shrsimd128

Shifts each lane to the right by the specified number of bits, shifting in zeros.

u64x2_shufflesimd128

Same as i8x16_shuffle, except operates as if the inputs were two 64-bit integers, only taking 2 indices to shuffle.

u64x2_splatsimd128

Creates a vector with identical lanes.

u64x2_subsimd128

Subtracts two 128-bit vectors as if they were two packed two 64-bit integers.

unreachable

Generates the trap instruction UNREACHABLE

v128_andsimd128

Performs a bitwise and of the two input 128-bit vectors, returning the resulting vector.

v128_andnotsimd128

Bitwise AND of bits of a and the logical inverse of bits of b.

v128_any_truesimd128

Returns true if any bit in a is set, or false otherwise.

v128_bitselectsimd128

Use the bitmask in c to select bits from v1 when 1 and v2 when 0.

v128_loadsimd128

Loads a v128 vector from the given heap address.

v128_load8_lanesimd128

Loads an 8-bit value from m and sets lane L of v to that value.

v128_load8_splatsimd128

Load a single element and splat to all lanes of a v128 vector.

v128_load16_lanesimd128

Loads a 16-bit value from m and sets lane L of v to that value.

v128_load16_splatsimd128

Load a single element and splat to all lanes of a v128 vector.

v128_load32_lanesimd128

Loads a 32-bit value from m and sets lane L of v to that value.

v128_load32_splatsimd128

Load a single element and splat to all lanes of a v128 vector.

v128_load32_zerosimd128

Loads a 32-bit element into the low bits of the vector and sets all other bits to zero.

v128_load64_lanesimd128

Loads a 64-bit value from m and sets lane L of v to that value.

v128_load64_splatsimd128

Load a single element and splat to all lanes of a v128 vector.

v128_load64_zerosimd128

Loads a 64-bit element into the low bits of the vector and sets all other bits to zero.

v128_notsimd128

Flips each bit of the 128-bit input vector.

v128_orsimd128

Performs a bitwise or of the two input 128-bit vectors, returning the resulting vector.

v128_storesimd128

Stores a v128 vector to the given heap address.

v128_store8_lanesimd128

Stores the 8-bit value from lane L of v into m

v128_store16_lanesimd128

Stores the 16-bit value from lane L of v into m

v128_store32_lanesimd128

Stores the 32-bit value from lane L of v into m

v128_store64_lanesimd128

Stores the 64-bit value from lane L of v into m

v128_xorsimd128

Performs a bitwise xor of the two input 128-bit vectors, returning the resulting vector.