Module core::arch::aarch64
Platform-specific intrinsics for the aarch64 platform.
See the module documentation for more details.
Structs
| float32x2_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of two packed f32. |
| float32x4_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of four packed f32. |
| float64x1_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of one packed f64. |
| float64x2_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of two packed f64. |
| int16x2_t | [Experimental] [AArch64] ARM-specific 32-bit wide vector of two packed i16. |
| int16x4_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of four packed i16. |
| int16x8_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of eight packed i16. |
| int32x2_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of two packed i32. |
| int32x4_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of four packed i32. |
| int64x1_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of one packed i64. |
| int64x2_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of two packed i64. |
| int8x16_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of sixteen packed i8. |
| int8x4_t | [Experimental] [AArch64] ARM-specific 32-bit wide vector of four packed i8. |
| int8x8_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of eight packed i8. |
| poly16x4_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of four packed p16. |
| poly16x8_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of eight packed p16. |
| poly8x16_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of sixteen packed p8. |
| poly8x8_t | [Experimental] [AArch64] ARM-specific 64-bit wide polynomial vector of eight packed p8. |
| uint16x2_t | [Experimental] [AArch64] ARM-specific 32-bit wide vector of two packed u16. |
| uint16x4_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of four packed u16. |
| uint16x8_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of eight packed u16. |
| uint32x2_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of two packed u32. |
| uint32x4_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of four packed u32. |
| uint64x1_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of one packed u64. |
| uint64x2_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of two packed u64. |
| uint8x16_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of sixteen packed u8. |
| uint8x4_t | [Experimental] [AArch64] ARM-specific 32-bit wide vector of four packed u8. |
| uint8x8_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of eight packed u8. |
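
These vector structs are opaque wrappers with no public constructors in this module; since each one has the same size and lane layout as the corresponding plain array, `mem::transmute` is the usual way to move values in and out. A minimal sketch (nightly-only at the time of these docs; the `roundtrip` name is illustrative):

```rust
#[cfg(target_arch = "aarch64")]
fn roundtrip() {
    use core::arch::aarch64::float32x4_t;
    use core::mem::transmute;

    unsafe {
        // [f32; 4] and float32x4_t are both 16 bytes of packed f32 lanes,
        // so transmuting between them is sound.
        let v: float32x4_t = transmute([1.0f32, 2.0, 3.0, 4.0]);
        let back: [f32; 4] = transmute(v);
        assert_eq!(back, [1.0, 2.0, 3.0, 4.0]);
    }
}
```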
Functions
| __DMB⚠ | [Experimental] [AArch64 and mclass] Data Memory Barrier |
| __DSB⚠ | [Experimental] [AArch64 and mclass] Data Synchronization Barrier |
| __ISB⚠ | [Experimental] [AArch64 and mclass] Instruction Synchronization Barrier |
| __NOP⚠ | [Experimental] [AArch64 and mclass] No Operation |
| __SEV⚠ | [Experimental] [AArch64 and mclass] Send Event |
| __WFE⚠ | [Experimental] [AArch64 and mclass] Wait For Event |
| __WFI⚠ | [Experimental] [AArch64 and mclass] Wait For Interrupt |
| __disable_fault_irq⚠ | [Experimental] [AArch64 and mclass] Disable FIQ |
| __disable_irq⚠ | [Experimental] [AArch64 and mclass] Disable IRQ Interrupts |
| __enable_fault_irq⚠ | [Experimental] [AArch64 and mclass] Enable FIQ |
| __enable_irq⚠ | [Experimental] [AArch64 and mclass] Enable IRQ Interrupts |
| __get_APSR⚠ | [Experimental] [AArch64 and mclass] Get APSR Register |
| __get_BASEPRI⚠ | [Experimental] [AArch64 and mclass] Get Base Priority |
| __get_CONTROL⚠ | [Experimental] [AArch64 and mclass] Get Control Register |
| __get_FAULTMASK⚠ | [Experimental] [AArch64 and mclass] Get Fault Mask |
| __get_IPSR⚠ | [Experimental] [AArch64 and mclass] Get IPSR Register |
| __get_MSP⚠ | [Experimental] [AArch64 and mclass] Get Main Stack Pointer |
| __get_PRIMASK⚠ | [Experimental] [AArch64 and mclass] Get Priority Mask |
| __get_PSP⚠ | [Experimental] [AArch64 and mclass] Get Process Stack Pointer |
| __get_xPSR⚠ | [Experimental] [AArch64 and mclass] Get xPSR Register |
| __set_BASEPRI⚠ | [Experimental] [AArch64 and mclass] Set Base Priority |
| __set_BASEPRI_MAX⚠ | [Experimental] [AArch64 and mclass] Set Base Priority with condition |
| __set_CONTROL⚠ | [Experimental] [AArch64 and mclass] Set Control Register |
| __set_FAULTMASK⚠ | [Experimental] [AArch64 and mclass] Set Fault Mask |
| __set_MSP⚠ | [Experimental] [AArch64 and mclass] Set Main Stack Pointer |
| __set_PRIMASK⚠ | [Experimental] [AArch64 and mclass] Set Priority Mask |
| __set_PSP⚠ | [Experimental] [AArch64 and mclass] Set Process Stack Pointer |
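
A hedged sketch of the barrier intrinsics above: each maps to a single instruction and is `unsafe`. The zero-argument forms are assumed from this listing; the exact unstable signatures may differ.

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn publish(flag: *mut u32) {
    use core::arch::aarch64::{__DMB, __NOP};

    flag.write_volatile(1);
    __DMB(); // Data Memory Barrier: completes the store before later memory accesses
    __NOP(); // No Operation: consumes one issue slot, no architectural effect
}
```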
| _cls_u32⚠ | [Experimental] [AArch64] Counts the leading most significant bits set. |
| _cls_u64⚠ | [Experimental] [AArch64] Counts the leading most significant bits set. |
| _clz_u64⚠ | [Experimental] [AArch64] Count Leading Zeros. |
| _rbit_u64⚠ | [Experimental] [AArch64] Reverse the bit order. |
| _rev_u16⚠ | [Experimental] [AArch64] Reverse the order of the bytes. |
| _rev_u32⚠ | [Experimental] [AArch64] Reverse the order of the bytes. |
| _rev_u64⚠ | [Experimental] [AArch64] Reverse the order of the bytes. |
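
A short sketch of the scalar bit-manipulation intrinsics, assuming each takes and returns the width named in its suffix:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn bit_tricks() {
    use core::arch::aarch64::{_clz_u64, _rbit_u64, _rev_u64};

    assert_eq!(_rbit_u64(1), 1 << 63); // RBIT: mirror all 64 bits
    assert_eq!(_rev_u64(0x0123_4567_89AB_CDEF), 0xEFCD_AB89_6745_2301); // REV: swap the 8 bytes
    assert_eq!(_clz_u64(1), 63); // CLZ: count leading zero bits
}
```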
| qadd⚠ | [Experimental] [AArch64] Signed saturating addition |
| qadd8⚠ | [Experimental] [AArch64] Saturating four 8-bit integer additions |
| qadd16⚠ | [Experimental] [AArch64] Saturating two 16-bit integer additions |
| qasx⚠ | [Experimental] [AArch64] Returns the 16-bit signed saturated equivalent of res[0] = a[0] - b[1]; res[1] = a[1] + b[0] |
| qsax⚠ | [Experimental] [AArch64] Returns the 16-bit signed saturated equivalent of res[0] = a[0] + b[1]; res[1] = a[1] - b[0] |
| qsub⚠ | [Experimental] [AArch64] Signed saturating subtraction |
| qsub8⚠ | [Experimental] [AArch64] Saturating four 8-bit integer subtractions |
| qsub16⚠ | [Experimental] [AArch64] Saturating two 16-bit integer subtractions |
| sadd8⚠ | [Experimental] [AArch64] Returns the 8-bit signed saturated equivalent of res[i] = a[i] + b[i] |
| sadd16⚠ | [Experimental] [AArch64] Returns the 16-bit signed saturated equivalent of res[i] = a[i] + b[i] |
| sasx⚠ | [Experimental] [AArch64] Returns the 16-bit signed equivalent of res[0] = a[0] - b[1]; res[1] = a[1] + b[0] |
| sel⚠ | [Experimental] [AArch64] Returns the equivalent of res[i] = GE[i] ? a[i] : b[i] |
| shadd8⚠ | [Experimental] [AArch64] Signed halving parallel byte-wise addition. |
| shadd16⚠ | [Experimental] [AArch64] Signed halving parallel halfword-wise addition. |
| shsub8⚠ | [Experimental] [AArch64] Signed halving parallel byte-wise subtraction. |
| shsub16⚠ | [Experimental] [AArch64] Signed halving parallel halfword-wise subtraction. |
| smuad⚠ | [Experimental] [AArch64] Signed Dual Multiply Add. |
| smuadx⚠ | [Experimental] [AArch64] Signed Dual Multiply Add Reversed. |
| smusd⚠ | [Experimental] [AArch64] Signed Dual Multiply Subtract. |
| smusdx⚠ | [Experimental] [AArch64] Signed Dual Multiply Subtract Reversed. |
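
The `q`-prefixed forms clamp on overflow instead of wrapping. A minimal sketch, assuming `qadd` and `qsub` take and return plain `i32`:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn saturating() {
    use core::arch::aarch64::{qadd, qsub};

    // Saturating arithmetic pins the result to the type's bounds on overflow.
    assert_eq!(qadd(i32::MAX, 1), i32::MAX);
    assert_eq!(qsub(i32::MIN, 1), i32::MIN);
}
```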
| vadd_f32⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vadd_f64⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vadd_s8⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vadd_s16⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vadd_s32⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vadd_u8⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vadd_u16⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vadd_u32⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vaddd_s64⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vaddd_u64⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vaddl_s8⚠ | [Experimental] [AArch64 and neon] Vector long add. |
| vaddl_s16⚠ | [Experimental] [AArch64 and neon] Vector long add. |
| vaddl_s32⚠ | [Experimental] [AArch64 and neon] Vector long add. |
| vaddl_u8⚠ | [Experimental] [AArch64 and neon] Vector long add. |
| vaddl_u16⚠ | [Experimental] [AArch64 and neon] Vector long add. |
| vaddl_u32⚠ | [Experimental] [AArch64 and neon] Vector long add. |
| vaddq_f32⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vaddq_f64⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vaddq_s8⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vaddq_s16⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vaddq_s32⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vaddq_s64⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vaddq_u8⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vaddq_u16⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vaddq_u32⚠ | [Experimental] [AArch64 and neon] Vector add. |
| vaddq_u64⚠ | [Experimental] [AArch64 and neon] Vector add. |
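
The `vadd*` family is lane-wise addition: the `q` suffix marks 128-bit operands, `l` the widening variants, and `d` the scalar 64-bit ones. A sketch of the 128-bit float form, constructing vectors by transmute as above:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn lanewise_add() {
    use core::arch::aarch64::{float32x4_t, vaddq_f32};
    use core::mem::transmute;

    let a: float32x4_t = transmute([1.0f32, 2.0, 3.0, 4.0]);
    let b: float32x4_t = transmute([10.0f32, 20.0, 30.0, 40.0]);
    // result[i] = a[i] + b[i] for each of the four lanes
    let sum: [f32; 4] = transmute(vaddq_f32(a, b));
    assert_eq!(sum, [11.0, 22.0, 33.0, 44.0]);
}
```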
| vaesdq_u8⚠ | [Experimental] [AArch64 and crypto] AES single round decryption. |
| vaeseq_u8⚠ | [Experimental] [AArch64 and crypto] AES single round encryption. |
| vaesimcq_u8⚠ | [Experimental] [AArch64 and crypto] AES inverse mix columns. |
| vaesmcq_u8⚠ | [Experimental] [AArch64 and crypto] AES mix columns. |
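
Each AES intrinsic covers part of a round: AESE performs AddRoundKey + SubBytes + ShiftRows, and AESMC supplies MixColumns, so a middle round of AES-128 encryption chains the two (final rounds omit MixColumns). A hedged sketch, assuming the `crypto` target feature gates these as the badges suggest:

```rust
#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "crypto")]
unsafe fn aes_middle_round(
    state: core::arch::aarch64::uint8x16_t,
    round_key: core::arch::aarch64::uint8x16_t,
) -> core::arch::aarch64::uint8x16_t {
    use core::arch::aarch64::{vaeseq_u8, vaesmcq_u8};
    // AESE = AddRoundKey + SubBytes + ShiftRows; AESMC = MixColumns
    vaesmcq_u8(vaeseq_u8(state, round_key))
}
```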
| vmaxv_f32⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxv_s8⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxv_s16⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxv_s32⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxv_u8⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxv_u16⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxv_u32⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxvq_f32⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxvq_f64⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxvq_s8⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxvq_s16⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxvq_s32⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxvq_u8⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxvq_u16⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vmaxvq_u32⚠ | [Experimental] [AArch64 and neon] Horizontal vector max. |
| vminv_f32⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminv_s8⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminv_s16⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminv_s32⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminv_u8⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminv_u16⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminv_u32⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminvq_f32⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminvq_f64⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminvq_s8⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminvq_s16⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminvq_s32⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminvq_u8⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminvq_u16⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
| vminvq_u32⚠ | [Experimental] [AArch64 and neon] Horizontal vector min. |
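
The horizontal reductions collapse every lane of one vector into a single scalar. A sketch:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn reduce() {
    use core::arch::aarch64::{float32x4_t, vmaxvq_f32, vminvq_f32};
    use core::mem::transmute;

    let v: float32x4_t = transmute([3.0f32, 1.0, 4.0, 1.5]);
    assert_eq!(vmaxvq_f32(v), 4.0); // max over all four lanes
    assert_eq!(vminvq_f32(v), 1.0); // min over all four lanes
}
```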
| vmovl_s8⚠ | [Experimental] [AArch64 and neon] Vector long move. |
| vmovl_s16⚠ | [Experimental] [AArch64 and neon] Vector long move. |
| vmovl_s32⚠ | [Experimental] [AArch64 and neon] Vector long move. |
| vmovl_u8⚠ | [Experimental] [AArch64 and neon] Vector long move. |
| vmovl_u16⚠ | [Experimental] [AArch64 and neon] Vector long move. |
| vmovl_u32⚠ | [Experimental] [AArch64 and neon] Vector long move. |
| vmovn_s16⚠ | [Experimental] [AArch64 and neon] Vector narrow integer. |
| vmovn_s32⚠ | [Experimental] [AArch64 and neon] Vector narrow integer. |
| vmovn_s64⚠ | [Experimental] [AArch64 and neon] Vector narrow integer. |
| vmovn_u16⚠ | [Experimental] [AArch64 and neon] Vector narrow integer. |
| vmovn_u32⚠ | [Experimental] [AArch64 and neon] Vector narrow integer. |
| vmovn_u64⚠ | [Experimental] [AArch64 and neon] Vector narrow integer. |
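
`vmovl*` widens each lane (sign- or zero-extending it to twice the width) and `vmovn*` narrows by truncation, so the pair round-trips values that fit the narrow type. A sketch:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn widen_then_narrow() {
    use core::arch::aarch64::{int8x8_t, vmovl_s8, vmovn_s16};
    use core::mem::transmute;

    let bytes: int8x8_t = transmute([1i8, -2, 3, -4, 5, -6, 7, -8]);
    let wide = vmovl_s8(bytes); // eight i8 lanes sign-extended to i16
    let back: [i8; 8] = transmute(vmovn_s16(wide)); // truncate back to i8
    assert_eq!(back, [1, -2, 3, -4, 5, -6, 7, -8]);
}
```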
| vpmax_f32⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmax_s8⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmax_s16⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmax_s32⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmax_u8⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmax_u16⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmax_u32⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmaxq_f32⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmaxq_f64⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmaxq_s8⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmaxq_s16⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmaxq_s32⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmaxq_u8⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmaxq_u16⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmaxq_u32⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs |
| vpmin_f32⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpmin_s8⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpmin_s16⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpmin_s32⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpmin_u8⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpmin_u16⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpmin_u32⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpminq_f32⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpminq_f64⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpminq_s8⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpminq_s16⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpminq_s32⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpminq_u8⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpminq_u16⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
| vpminq_u32⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs |
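
The folding (pairwise) forms reduce adjacent lane pairs rather than whole vectors: the low half of the result holds the pairwise maxima of the first operand and the high half those of the second. A sketch:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn pairwise_max() {
    use core::arch::aarch64::{int8x8_t, vpmax_s8};
    use core::mem::transmute;

    let a: int8x8_t = transmute([1i8, 2, 3, 4, 5, 6, 7, 8]);
    let b: int8x8_t = transmute([8i8, 7, 6, 5, 4, 3, 2, 1]);
    // max of each adjacent pair in `a`, then each adjacent pair in `b`
    let m: [i8; 8] = transmute(vpmax_s8(a, b));
    assert_eq!(m, [2, 4, 6, 8, 8, 6, 4, 2]);
}
```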
| vrsqrte_f32⚠ | [Experimental] [AArch64 and neon] Reciprocal square-root estimate. |
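
`vrsqrte_f32` gives only a low-precision estimate of 1/sqrt(x) per lane (roughly 8 bits), meant as the seed for Newton-Raphson refinement. A sketch checking it lands near the exact values:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn rsqrt_estimate() {
    use core::arch::aarch64::{float32x2_t, vrsqrte_f32};
    use core::mem::transmute;

    let v: float32x2_t = transmute([4.0f32, 16.0]);
    // exact answers are [0.5, 0.25]; the estimate is within ~1/256 relative error
    let e: [f32; 2] = transmute(vrsqrte_f32(v));
    assert!((e[0] - 0.5).abs() < 0.01);
    assert!((e[1] - 0.25).abs() < 0.01);
}
```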
| vsha1cq_u32⚠ | [Experimental] [AArch64 and crypto] SHA1 hash update accelerator, choose. |
| vsha1h_u32⚠ | [Experimental] [AArch64 and crypto] SHA1 fixed rotate. |
| vsha1mq_u32⚠ | [Experimental] [AArch64 and crypto] SHA1 hash update accelerator, majority. |
| vsha1pq_u32⚠ | [Experimental] [AArch64 and crypto] SHA1 hash update accelerator, parity. |
| vsha1su0q_u32⚠ | [Experimental] [AArch64 and crypto] SHA1 schedule update accelerator, first part. |
| vsha1su1q_u32⚠ | [Experimental] [AArch64 and crypto] SHA1 schedule update accelerator, second part. |
| vsha256h2q_u32⚠ | [Experimental] [AArch64 and crypto] SHA256 hash update accelerator, upper part. |
| vsha256hq_u32⚠ | [Experimental] [AArch64 and crypto] SHA256 hash update accelerator. |
| vsha256su0q_u32⚠ | [Experimental] [AArch64 and crypto] SHA256 schedule update accelerator, first part. |
| vsha256su1q_u32⚠ | [Experimental] [AArch64 and crypto] SHA256 schedule update accelerator, second part. |
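
Of the SHA helpers, `vsha1h_u32` is the simplest: it applies the fixed rotation used between SHA-1 rounds to a plain `u32`, so it should agree with `rotate_left(30)`. A hedged sketch, again assuming the `crypto` feature name from the badges:

```rust
#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "crypto")]
unsafe fn sha1_fixed_rotate(e: u32) -> u32 {
    use core::arch::aarch64::vsha1h_u32;
    // SHA1H: the rotate-left-by-30 applied to the SHA-1 `e` working variable,
    // equivalent to e.rotate_left(30)
    vsha1h_u32(e)
}
```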