firmware/coreboot: Subtree merged arm-trusted-firmware

Signed-off-by: David Hendricks <dhendricks@fb.com>
David Hendricks
2018-06-14 15:22:44 -07:00
1258 changed files with 204833 additions and 1 deletion

Submodule firmware/coreboot/3rdparty/arm-trusted-firmware deleted from 693e278e30

View File

@@ -0,0 +1,95 @@
#
# Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# Neither the name of ARM nor the names of its contributors may be used
# to endorse or promote products derived from this software without specific
# prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
#
# Configure how the Linux checkpatch script should be invoked in the context of
# the Trusted Firmware source tree.
#
# This is not Linux so don't expect a Linux tree!
--no-tree
# The Linux kernel expects the SPDX license tag in the first line of each file.
# We don't follow this in the Trusted Firmware.
--ignore SPDX_LICENSE_TAG
# This clarifies the line number indications in the report.
#
# E.g.:
# Without this option, we have the following output:
# #333: FILE: drivers/arm/gic/arm_gic.c:160:
# So we have two line number indications (333 and 160), which is confusing.
# We only care about the position in the source file.
#
# With this option, it becomes:
# drivers/arm/gic/arm_gic.c:160:
--showfile
# Don't show some messages, like the list of ignored types, the suggestion to
# use "--fix", or the request to report changes to the maintainers.
--quiet
#
# Ignore the following message types, as they don't necessarily make sense in
# the context of the Trusted Firmware.
#
# COMPLEX_MACRO generates false positives.
--ignore COMPLEX_MACRO
# Commit messages might contain a Gerrit Change-Id.
--ignore GERRIT_CHANGE_ID
# Do not check the format of commit messages, as GitHub's merge commits do not
# observe it.
--ignore GIT_COMMIT_ID
# FILE_PATH_CHANGES reports this kind of message:
# "added, moved or deleted file(s), does MAINTAINERS need updating?"
# We do not use this MAINTAINERS file process in TF.
--ignore FILE_PATH_CHANGES
# AVOID_EXTERNS reports this kind of message:
# "externs should be avoided in .c files"
# We don't follow this convention in TF.
--ignore AVOID_EXTERNS
# NEW_TYPEDEFS reports this kind of message:
# "do not add new typedefs"
# We allow adding new typedefs in TF.
--ignore NEW_TYPEDEFS
# Avoid "Does not appear to be a unified-diff format patch" message
--ignore NOT_UNIFIED_DIFF
# VOLATILE reports this kind of message:
# "Use of volatile is usually wrong: see Documentation/volatile-considered-harmful.txt"
# We allow the usage of the volatile keyword in TF.
--ignore VOLATILE
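#
# Usage note (illustrative, not part of the upstream file): checkpatch.pl
# automatically reads a .checkpatch.conf from the current working directory,
# so invoking it from the top of the tree applies all of the options above:
#   ../linux/scripts/checkpatch.pl -f drivers/arm/gic/arm_gic.c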

View File

@@ -0,0 +1,62 @@
#
# Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
#
# ARM Trusted Firmware Coding style spec for editors.
# References:
# [EC] http://editorconfig.org/
# [CONT] contributing.rst
# [LCS] Linux Coding Style
# (https://www.kernel.org/doc/html/v4.10/process/coding-style.html)
root = true
# set default to match [LCS] .c/.h settings.
# This will also apply to .S, .mk, .sh, Makefile, .dts, etc.
[*]
# Not specified, but fits current ARM-TF sources.
charset = utf-8
# Not specified, but implicit for "LINUX coding style".
end_of_line = lf
# [LCS] Chapter 1: Indentation
# "and thus indentations are also 8 characters"
indent_size = 8
# [LCS] Chapter 1: Indentation
# "Outside of comments,...spaces are never used for indentation"
indent_style = tab
# Not specified by [LCS], but sensible
insert_final_newline = true
# [LCS] Chapter 2: Breaking long lines and strings
# "The limit on the length of lines is 80 columns"
# This is a "soft" requirement for Arm-TF, and should not be the sole
# reason for changes.
max_line_length = 80
# [LCS] Chapter 1: Indentation
# "Tabs are 8 characters"
tab_width = 8
# [LCS] Chapter 1: Indentation
# "Get a decent editor and don't leave whitespace at the end of lines."
# [LCS] Chapter 3.1: Spaces
# "Do not leave trailing whitespace at the ends of lines."
trim_trailing_whitespace = true
# Adjustment for existing .rst files with different format
[*.{rst,md}]
indent_size = 4
indent_style = space
max_line_length = 180
# 180 only selected to prevent changes to existing text.
tab_width = 4
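# Note (illustrative, not part of the upstream file): editors with
# EditorConfig support pick these settings up automatically for any file
# opened below the directory containing this file; "root = true" above stops
# the search for further .editorconfig files in parent directories.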

View File

@@ -0,0 +1,26 @@
# Ignore miscellaneous files
cscope.*
*.swp
*.patch
*~
.project
.cproject
# Ignore build directory
build/
# Ignore build products from tools
tools/**/*.o
tools/fip_create/
tools/fiptool/fiptool
tools/fiptool/fiptool.exe
tools/cert_create/src/*.o
tools/cert_create/src/**/*.o
tools/cert_create/cert_create
tools/cert_create/cert_create.exe
# GNU GLOBAL files
GPATH
GRTAGS
GSYMS
GTAGS

View File

@@ -0,0 +1,845 @@
#
# Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
#
#
# Trusted Firmware Version
#
VERSION_MAJOR := 1
VERSION_MINOR := 5
# Default goal is build all images
.DEFAULT_GOAL := all
# Avoid any implicit propagation of command line variable definitions to
# sub-Makefiles, like CFLAGS that we reserved for the firmware images'
# usage. Other command line options like "-s" are still propagated as usual.
MAKEOVERRIDES =
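# Illustrative example (not part of the original makefile): with
# MAKEOVERRIDES cleared,
#   make PLAT=fvp CFLAGS=-O0 all
# keeps the command-line CFLAGS out of the fiptool/cert_create sub-makes,
# whereas
#   make -s PLAT=fvp all
# still runs those sub-makes silently, since "-s" is propagated as usual.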
MAKE_HELPERS_DIRECTORY := make_helpers/
include ${MAKE_HELPERS_DIRECTORY}build_macros.mk
include ${MAKE_HELPERS_DIRECTORY}build_env.mk
################################################################################
# Default values for build configurations, and their dependencies
################################################################################
ifdef ASM_ASSERTION
$(warning ASM_ASSERTION is removed, use ENABLE_ASSERTIONS instead.)
endif
include ${MAKE_HELPERS_DIRECTORY}defaults.mk
# Assertions enabled for DEBUG builds by default
ENABLE_ASSERTIONS := ${DEBUG}
ENABLE_PMF := ${ENABLE_RUNTIME_INSTRUMENTATION}
PLAT := ${DEFAULT_PLAT}
################################################################################
# Checkpatch script options
################################################################################
CHECKCODE_ARGS := --no-patch
# Do not check the coding style on imported library files or documentation files
INC_LIB_DIRS_TO_CHECK := $(sort $(filter-out \
include/lib/libfdt \
include/lib/stdlib, \
$(wildcard include/lib/*)))
INC_DIRS_TO_CHECK := $(sort $(filter-out \
include/lib, \
$(wildcard include/*)))
LIB_DIRS_TO_CHECK := $(sort $(filter-out \
lib/compiler-rt \
lib/libfdt% \
lib/stdlib, \
$(wildcard lib/*)))
ROOT_DIRS_TO_CHECK := $(sort $(filter-out \
lib \
include \
docs \
%.md, \
$(wildcard *)))
CHECK_PATHS := ${ROOT_DIRS_TO_CHECK} \
${INC_DIRS_TO_CHECK} \
${INC_LIB_DIRS_TO_CHECK} \
${LIB_DIRS_TO_CHECK}
################################################################################
# Process build options
################################################################################
# Verbose flag
ifeq (${V},0)
Q:=@
CHECKCODE_ARGS += --no-summary --terse
else
Q:=
endif
export Q
# Process Debug flag
$(eval $(call add_define,DEBUG))
ifneq (${DEBUG}, 0)
BUILD_TYPE := debug
TF_CFLAGS += -g
ASFLAGS += -g -Wa,--gdwarf-2
# Use LOG_LEVEL_INFO by default for debug builds
LOG_LEVEL := 40
else
BUILD_TYPE := release
# Use LOG_LEVEL_NOTICE by default for release builds
LOG_LEVEL := 20
endif
# Default build string (git branch and commit)
ifeq (${BUILD_STRING},)
BUILD_STRING := $(shell git describe --always --dirty --tags 2> /dev/null)
endif
VERSION_STRING := v${VERSION_MAJOR}.${VERSION_MINOR}(${BUILD_TYPE}):${BUILD_STRING}
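# For example (illustrative): a debug build at a commit that git describes as
# "v1.5-rc0-dirty" would yield VERSION_STRING = v1.5(debug):v1.5-rc0-dirty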
# The cert_create tool cannot generate certificates individually, so we use the
# target 'certificates' to create them all
ifneq (${GENERATE_COT},0)
FIP_DEPS += certificates
FWU_FIP_DEPS += fwu_certificates
endif
################################################################################
# Toolchain
################################################################################
HOSTCC := gcc
export HOSTCC
CC := ${CROSS_COMPILE}gcc
CPP := ${CROSS_COMPILE}cpp
AS := ${CROSS_COMPILE}gcc
AR := ${CROSS_COMPILE}ar
LD := ${CROSS_COMPILE}ld
OC := ${CROSS_COMPILE}objcopy
OD := ${CROSS_COMPILE}objdump
NM := ${CROSS_COMPILE}nm
PP := ${CROSS_COMPILE}gcc -E
DTC := dtc
# Use ${LD}.bfd instead if it exists (as an absolute path or via $PATH).
ifneq ($(strip $(wildcard ${LD}.bfd) \
$(foreach dir,$(subst :, ,${PATH}),$(wildcard ${dir}/${LD}.bfd))),)
LD := ${LD}.bfd
endif
ifeq (${ARM_ARCH_MAJOR},7)
target32-directive = -target arm-none-eabi
# Will set march32-directive from platform configuration
else
target32-directive = -target armv8a-none-eabi
march32-directive = -march=armv8-a
endif
ifeq ($(notdir $(CC)),armclang)
TF_CFLAGS_aarch32 = -target arm-arm-none-eabi $(march32-directive)
TF_CFLAGS_aarch64 = -target aarch64-arm-none-eabi -march=armv8-a
else ifneq ($(findstring clang,$(notdir $(CC))),)
TF_CFLAGS_aarch32 = $(target32-directive)
TF_CFLAGS_aarch64 = -target aarch64-elf
else
TF_CFLAGS_aarch32 = $(march32-directive)
TF_CFLAGS_aarch64 = -march=armv8-a
endif
TF_CFLAGS_aarch64 += -mgeneral-regs-only -mstrict-align
ASFLAGS_aarch32 = $(march32-directive)
ASFLAGS_aarch64 = -march=armv8-a
CPPFLAGS = ${DEFINES} ${INCLUDES} -nostdinc \
-Wmissing-include-dirs -Werror
ASFLAGS += $(CPPFLAGS) $(ASFLAGS_$(ARCH)) \
-D__ASSEMBLY__ -ffreestanding \
-Wa,--fatal-warnings
TF_CFLAGS += $(CPPFLAGS) $(TF_CFLAGS_$(ARCH)) \
-ffreestanding -fno-builtin -Wall -std=gnu99 \
-Os -ffunction-sections -fdata-sections
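# Detect toolchains that produce position-independent executables by default
# (i.e. GCC configured with --enable-default-pie) and pass -fno-PIE in that
# case, as the firmware images are linked to run at fixed addresses.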
GCC_V_OUTPUT := $(shell $(CC) -v 2>&1)
PIE_FOUND := $(findstring --enable-default-pie,${GCC_V_OUTPUT})
ifneq ($(PIE_FOUND),)
TF_CFLAGS += -fno-PIE
endif
TF_LDFLAGS += --fatal-warnings -O1
TF_LDFLAGS += --gc-sections
TF_LDFLAGS += $(TF_LDFLAGS_$(ARCH))
DTC_FLAGS += -I dts -O dtb
################################################################################
# Common sources and include directories
################################################################################
include lib/compiler-rt/compiler-rt.mk
include lib/stdlib/stdlib.mk
BL_COMMON_SOURCES += common/bl_common.c \
common/tf_log.c \
common/tf_printf.c \
common/tf_snprintf.c \
common/${ARCH}/debug.S \
lib/${ARCH}/cache_helpers.S \
lib/${ARCH}/misc_helpers.S \
plat/common/plat_bl_common.c \
plat/common/plat_log_common.c \
plat/common/${ARCH}/plat_common.c \
plat/common/${ARCH}/platform_helpers.S \
${COMPILER_RT_SRCS} \
${STDLIB_SRCS}
INCLUDES += -Iinclude/bl1 \
-Iinclude/bl2 \
-Iinclude/bl2u \
-Iinclude/bl31 \
-Iinclude/common \
-Iinclude/common/${ARCH} \
-Iinclude/drivers \
-Iinclude/drivers/arm \
-Iinclude/drivers/auth \
-Iinclude/drivers/io \
-Iinclude/drivers/ti/uart \
-Iinclude/lib \
-Iinclude/lib/${ARCH} \
-Iinclude/lib/cpus \
-Iinclude/lib/cpus/${ARCH} \
-Iinclude/lib/el3_runtime \
-Iinclude/lib/el3_runtime/${ARCH} \
-Iinclude/lib/extensions \
-Iinclude/lib/pmf \
-Iinclude/lib/psci \
-Iinclude/lib/xlat_tables \
-Iinclude/plat/common \
-Iinclude/services \
${PLAT_INCLUDES} \
${SPD_INCLUDES} \
-Iinclude/tools_share
################################################################################
# Generic definitions
################################################################################
include ${MAKE_HELPERS_DIRECTORY}plat_helpers.mk
BUILD_BASE := ./build
BUILD_PLAT := ${BUILD_BASE}/${PLAT}/${BUILD_TYPE}
SPDS := $(sort $(filter-out none, $(patsubst services/spd/%,%,$(wildcard services/spd/*))))
# Platforms providing their own TBB makefile may override this value
INCLUDE_TBBR_MK := 1
################################################################################
# Include SPD Makefile if one has been specified
################################################################################
ifneq (${SPD},none)
ifeq (${ARCH},aarch32)
$(error "Error: SPD is incompatible with AArch32.")
endif
ifdef EL3_PAYLOAD_BASE
$(warning "SPD and EL3_PAYLOAD_BASE are incompatible build options.")
$(warning "The SPD and its BL32 companion will be present but ignored.")
endif
# We expect to locate an spd.mk under the specified SPD directory
SPD_MAKE := $(wildcard services/spd/${SPD}/${SPD}.mk)
ifeq (${SPD_MAKE},)
$(error Error: No services/spd/${SPD}/${SPD}.mk located)
endif
$(info Including ${SPD_MAKE})
include ${SPD_MAKE}
# If there's a BL32 companion for the chosen SPD, we expect the SPD's
# Makefile to set NEED_BL32 to "yes". In this case, the build system
# supports two mutually exclusive options:
# * BL32 is built from source: then BL32_SOURCES must contain the list
# of source files to build BL32
# * BL32 is a prebuilt binary: then BL32 must point to the image file
# that will be included in the FIP
# If both BL32_SOURCES and BL32 are defined, the binary takes precedence
# over the sources.
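# Illustrative invocations (not from this makefile):
#   make PLAT=<plat> SPD=<spd> all fip               - build BL32 from the
#                                                      SPD's BL32_SOURCES
#   make PLAT=<plat> SPD=<spd> BL32=<bl32.bin> fip   - package a prebuilt
#                                                      BL32 binary instead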
endif
################################################################################
# Include the platform specific Makefile after the SPD Makefile (the platform
# makefile may use all previous definitions in this file)
################################################################################
include ${PLAT_MAKEFILE_FULL}
$(eval $(call MAKE_PREREQ_DIR,${BUILD_PLAT}))
ifeq (${ARM_ARCH_MAJOR},7)
include make_helpers/armv7-a-cpus.mk
endif
# Platform compatibility is not supported in AArch32
ifneq (${ARCH},aarch32)
# If the platform has not defined ENABLE_PLAT_COMPAT, then enable it by default
ifndef ENABLE_PLAT_COMPAT
ENABLE_PLAT_COMPAT := 1
endif
# Include the platform compatibility helpers for PSCI
ifneq (${ENABLE_PLAT_COMPAT}, 0)
include plat/compat/plat_compat.mk
endif
endif
# Include the CPU specific operations makefile, which provides default
# values for all CPU errata workarounds and CPU specific optimisations.
# This can be overridden by the platform.
include lib/cpus/cpu-ops.mk
ifeq (${ARCH},aarch32)
NEED_BL32 := yes
################################################################################
# Build `AARCH32_SP` as BL32 image for AArch32
################################################################################
ifneq (${AARCH32_SP},none)
# We expect to locate an sp.mk under the specified AARCH32_SP directory
AARCH32_SP_MAKE := $(wildcard bl32/${AARCH32_SP}/${AARCH32_SP}.mk)
ifeq (${AARCH32_SP_MAKE},)
$(error Error: No bl32/${AARCH32_SP}/${AARCH32_SP}.mk located)
endif
$(info Including ${AARCH32_SP_MAKE})
include ${AARCH32_SP_MAKE}
endif
endif
################################################################################
# Check incompatible options
################################################################################
ifdef EL3_PAYLOAD_BASE
ifdef PRELOADED_BL33_BASE
$(warning "PRELOADED_BL33_BASE and EL3_PAYLOAD_BASE are \
incompatible build options. EL3_PAYLOAD_BASE has priority.")
endif
ifneq (${GENERATE_COT},0)
$(error "GENERATE_COT and EL3_PAYLOAD_BASE are incompatible build options.")
endif
ifneq (${TRUSTED_BOARD_BOOT},0)
$(error "TRUSTED_BOARD_BOOT and EL3_PAYLOAD_BASE are incompatible build options.")
endif
endif
ifeq (${NEED_BL33},yes)
ifdef EL3_PAYLOAD_BASE
$(warning "BL33 image is not needed when option \
BL33_PAYLOAD_BASE is used and won't be added to the FIP file.")
endif
ifdef PRELOADED_BL33_BASE
$(warning "BL33 image is not needed when option \
PRELOADED_BL33_BASE is used and won't be added to the FIP \
file.")
endif
endif
# For AArch32, LOAD_IMAGE_V2 must be enabled.
ifeq (${ARCH},aarch32)
ifeq (${LOAD_IMAGE_V2}, 0)
$(error "For AArch32, LOAD_IMAGE_V2 must be enabled.")
endif
endif
# When building for systems with hardware-assisted coherency, there's no need to
# use USE_COHERENT_MEM. Require that USE_COHERENT_MEM must be set to 0 too.
ifeq ($(HW_ASSISTED_COHERENCY)-$(USE_COHERENT_MEM),1-1)
$(error USE_COHERENT_MEM cannot be enabled with HW_ASSISTED_COHERENCY)
endif
ifneq ($(MULTI_CONSOLE_API), 0)
ifeq (${ARCH},aarch32)
$(error "Error: MULTI_CONSOLE_API is not supported for AArch32")
endif
endif
# For now, BL2_IN_XIP_MEM is only supported when BL2_AT_EL3 is 1.
ifeq ($(BL2_AT_EL3)-$(BL2_IN_XIP_MEM),0-1)
$(error "BL2_IN_XIP_MEM is only supported when BL2_AT_EL3 is enabled")
endif
# SMC Calling Convention checks
ifneq (${SMCCC_MAJOR_VERSION},1)
ifneq (${SPD},none)
$(error "SMC Calling Convention 1.X must be used with SPDs")
endif
ifeq (${ARCH},aarch32)
$(error "Only SMCCC 1.X is supported in AArch32 mode.")
endif
endif
# For RAS_EXTENSION, require that EAs are handled in EL3 first
ifeq ($(RAS_EXTENSION),1)
ifneq ($(HANDLE_EA_EL3_FIRST),1)
$(error For RAS_EXTENSION, HANDLE_EA_EL3_FIRST must also be 1)
endif
endif
# When FAULT_INJECTION_SUPPORT is used, require that RAS_EXTENSION is enabled
ifeq ($(FAULT_INJECTION_SUPPORT),1)
ifneq ($(RAS_EXTENSION),1)
$(error For FAULT_INJECTION_SUPPORT, RAS_EXTENSION must also be 1)
endif
endif
# DYN_DISABLE_AUTH can be set only when TRUSTED_BOARD_BOOT=1 and LOAD_IMAGE_V2=1
ifeq ($(DYN_DISABLE_AUTH), 1)
ifeq (${TRUSTED_BOARD_BOOT}, 0)
$(error "TRUSTED_BOARD_BOOT must be enabled for DYN_DISABLE_AUTH to be set.")
endif
ifeq (${LOAD_IMAGE_V2}, 0)
$(error "DYN_DISABLE_AUTH is only supported for LOAD_IMAGE_V2.")
endif
endif
################################################################################
# Process platform overrideable behaviour
################################################################################
# Using the ARM Trusted Firmware BL2 implies that a BL33 image also needs to be
# supplied for the FIP and Certificate generation tools. This flag can be
# overridden by the platform.
ifdef BL2_SOURCES
ifdef EL3_PAYLOAD_BASE
# If booting an EL3 payload there is no need for a BL33 image
# in the FIP file.
NEED_BL33 := no
else
ifdef PRELOADED_BL33_BASE
# If booting a preloaded BL33 image there is no need for
# another one in the FIP file.
NEED_BL33 := no
else
NEED_BL33 ?= yes
endif
endif
endif
# If SCP_BL2 is given, we always want FIP to include it.
ifdef SCP_BL2
NEED_SCP_BL2 := yes
endif
# For AArch32, BL31 is not currently supported.
ifneq (${ARCH},aarch32)
ifdef BL31_SOURCES
# When booting an EL3 payload, there is no need to compile the BL31 image nor
# put it in the FIP.
ifndef EL3_PAYLOAD_BASE
NEED_BL31 := yes
endif
endif
endif
# Process TBB related flags
ifneq (${GENERATE_COT},0)
# Common cert_create options
ifneq (${CREATE_KEYS},0)
$(eval CRT_ARGS += -n)
$(eval FWU_CRT_ARGS += -n)
ifneq (${SAVE_KEYS},0)
$(eval CRT_ARGS += -k)
$(eval FWU_CRT_ARGS += -k)
endif
endif
# Include TBBR makefile (unless the platform indicates otherwise)
ifeq (${INCLUDE_TBBR_MK},1)
include make_helpers/tbbr/tbbr_tools.mk
endif
endif
ifneq (${FIP_ALIGN},0)
FIP_ARGS += --align ${FIP_ALIGN}
endif
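# For example (illustrative), "make PLAT=<plat> FIP_ALIGN=16 fip" passes
# "--align 16" to fiptool so that images inside the FIP start on 16-byte
# boundaries.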
################################################################################
# Include libraries' Makefile that are used in all BL
################################################################################
include lib/stack_protector/stack_protector.mk
################################################################################
# Auxiliary tools (fiptool, cert_create, etc)
################################################################################
# Variables for use with Certificate Generation Tool
CRTTOOLPATH ?= tools/cert_create
CRTTOOL ?= ${CRTTOOLPATH}/cert_create${BIN_EXT}
# Variables for use with Firmware Image Package
FIPTOOLPATH ?= tools/fiptool
FIPTOOL ?= ${FIPTOOLPATH}/fiptool${BIN_EXT}
################################################################################
# Include BL specific makefiles
################################################################################
ifdef BL1_SOURCES
NEED_BL1 := yes
include bl1/bl1.mk
endif
ifdef BL2_SOURCES
NEED_BL2 := yes
include bl2/bl2.mk
endif
ifdef BL2U_SOURCES
NEED_BL2U := yes
include bl2u/bl2u.mk
endif
ifeq (${NEED_BL31},yes)
ifdef BL31_SOURCES
include bl31/bl31.mk
endif
endif
ifdef FDT_SOURCES
NEED_FDT := yes
endif
################################################################################
# Build options checks
################################################################################
$(eval $(call assert_boolean,COLD_BOOT_SINGLE_CPU))
$(eval $(call assert_boolean,CREATE_KEYS))
$(eval $(call assert_boolean,CTX_INCLUDE_AARCH32_REGS))
$(eval $(call assert_boolean,CTX_INCLUDE_FPREGS))
$(eval $(call assert_boolean,DEBUG))
$(eval $(call assert_boolean,DISABLE_PEDANTIC))
$(eval $(call assert_boolean,DYN_DISABLE_AUTH))
$(eval $(call assert_boolean,EL3_EXCEPTION_HANDLING))
$(eval $(call assert_boolean,ENABLE_AMU))
$(eval $(call assert_boolean,ENABLE_ASSERTIONS))
$(eval $(call assert_boolean,ENABLE_PLAT_COMPAT))
$(eval $(call assert_boolean,ENABLE_PMF))
$(eval $(call assert_boolean,ENABLE_PSCI_STAT))
$(eval $(call assert_boolean,ENABLE_RUNTIME_INSTRUMENTATION))
$(eval $(call assert_boolean,ENABLE_SPE_FOR_LOWER_ELS))
$(eval $(call assert_boolean,ENABLE_SPM))
$(eval $(call assert_boolean,ENABLE_SVE_FOR_NS))
$(eval $(call assert_boolean,ERROR_DEPRECATED))
$(eval $(call assert_boolean,FAULT_INJECTION_SUPPORT))
$(eval $(call assert_boolean,GENERATE_COT))
$(eval $(call assert_boolean,GICV2_G0_FOR_EL3))
$(eval $(call assert_boolean,HANDLE_EA_EL3_FIRST))
$(eval $(call assert_boolean,HW_ASSISTED_COHERENCY))
$(eval $(call assert_boolean,LOAD_IMAGE_V2))
$(eval $(call assert_boolean,MULTI_CONSOLE_API))
$(eval $(call assert_boolean,NS_TIMER_SWITCH))
$(eval $(call assert_boolean,PL011_GENERIC_UART))
$(eval $(call assert_boolean,PROGRAMMABLE_RESET_ADDRESS))
$(eval $(call assert_boolean,PSCI_EXTENDED_STATE_ID))
$(eval $(call assert_boolean,RAS_EXTENSION))
$(eval $(call assert_boolean,RESET_TO_BL31))
$(eval $(call assert_boolean,SAVE_KEYS))
$(eval $(call assert_boolean,SEPARATE_CODE_AND_RODATA))
$(eval $(call assert_boolean,SPIN_ON_BL1_EXIT))
$(eval $(call assert_boolean,TRUSTED_BOARD_BOOT))
$(eval $(call assert_boolean,USE_COHERENT_MEM))
$(eval $(call assert_boolean,USE_TBBR_DEFS))
$(eval $(call assert_boolean,WARMBOOT_ENABLE_DCACHE_EARLY))
$(eval $(call assert_boolean,BL2_AT_EL3))
$(eval $(call assert_boolean,BL2_IN_XIP_MEM))
$(eval $(call assert_numeric,ARM_ARCH_MAJOR))
$(eval $(call assert_numeric,ARM_ARCH_MINOR))
$(eval $(call assert_numeric,SMCCC_MAJOR_VERSION))
################################################################################
# Add definitions to the cpp preprocessor based on the current build options.
# This is done after including the platform specific makefile to allow the
# platform to overwrite the default options
################################################################################
$(eval $(call add_define,ARM_ARCH_MAJOR))
$(eval $(call add_define,ARM_ARCH_MINOR))
$(eval $(call add_define,ARM_GIC_ARCH))
$(eval $(call add_define,COLD_BOOT_SINGLE_CPU))
$(eval $(call add_define,CTX_INCLUDE_AARCH32_REGS))
$(eval $(call add_define,CTX_INCLUDE_FPREGS))
$(eval $(call add_define,EL3_EXCEPTION_HANDLING))
$(eval $(call add_define,ENABLE_AMU))
$(eval $(call add_define,ENABLE_ASSERTIONS))
$(eval $(call add_define,ENABLE_PLAT_COMPAT))
$(eval $(call add_define,ENABLE_PMF))
$(eval $(call add_define,ENABLE_PSCI_STAT))
$(eval $(call add_define,ENABLE_RUNTIME_INSTRUMENTATION))
$(eval $(call add_define,ENABLE_SPE_FOR_LOWER_ELS))
$(eval $(call add_define,ENABLE_SPM))
$(eval $(call add_define,ENABLE_SVE_FOR_NS))
$(eval $(call add_define,ERROR_DEPRECATED))
$(eval $(call add_define,FAULT_INJECTION_SUPPORT))
$(eval $(call add_define,GICV2_G0_FOR_EL3))
$(eval $(call add_define,HANDLE_EA_EL3_FIRST))
$(eval $(call add_define,HW_ASSISTED_COHERENCY))
$(eval $(call add_define,LOAD_IMAGE_V2))
$(eval $(call add_define,LOG_LEVEL))
$(eval $(call add_define,MULTI_CONSOLE_API))
$(eval $(call add_define,NS_TIMER_SWITCH))
$(eval $(call add_define,PL011_GENERIC_UART))
$(eval $(call add_define,PLAT_${PLAT}))
$(eval $(call add_define,PROGRAMMABLE_RESET_ADDRESS))
$(eval $(call add_define,PSCI_EXTENDED_STATE_ID))
$(eval $(call add_define,RAS_EXTENSION))
$(eval $(call add_define,RESET_TO_BL31))
$(eval $(call add_define,SEPARATE_CODE_AND_RODATA))
$(eval $(call add_define,SMCCC_MAJOR_VERSION))
$(eval $(call add_define,SPD_${SPD}))
$(eval $(call add_define,SPIN_ON_BL1_EXIT))
$(eval $(call add_define,TRUSTED_BOARD_BOOT))
$(eval $(call add_define,USE_COHERENT_MEM))
$(eval $(call add_define,USE_TBBR_DEFS))
$(eval $(call add_define,WARMBOOT_ENABLE_DCACHE_EARLY))
$(eval $(call add_define,BL2_AT_EL3))
$(eval $(call add_define,BL2_IN_XIP_MEM))
# Define the EL3_PAYLOAD_BASE flag only if it is provided.
ifdef EL3_PAYLOAD_BASE
$(eval $(call add_define,EL3_PAYLOAD_BASE))
else
# Define the PRELOADED_BL33_BASE flag only if it is provided and
# EL3_PAYLOAD_BASE is not defined, as it has priority.
ifdef PRELOADED_BL33_BASE
$(eval $(call add_define,PRELOADED_BL33_BASE))
endif
endif
# Define the AARCH32/AARCH64 flag based on the ARCH flag
ifeq (${ARCH},aarch32)
$(eval $(call add_define,AARCH32))
else
$(eval $(call add_define,AARCH64))
endif
# Define the DYN_DISABLE_AUTH flag only if set.
ifeq (${DYN_DISABLE_AUTH},1)
$(eval $(call add_define,DYN_DISABLE_AUTH))
endif
################################################################################
# Build targets
################################################################################
.PHONY: all msg_start clean realclean distclean cscope locate-checkpatch checkcodebase checkpatch fiptool fip fwu_fip certtool dtbs
.SUFFIXES:
all: msg_start
msg_start:
@echo "Building ${PLAT}"
# Check if deprecated declarations and cpp warnings should be treated as errors or not.
ifeq (${ERROR_DEPRECATED},0)
CPPFLAGS += -Wno-error=deprecated-declarations -Wno-error=cpp
endif
# Expand build macros for the different images
ifeq (${NEED_BL1},yes)
$(eval $(call MAKE_BL,1))
endif
ifeq (${NEED_BL2},yes)
ifeq (${BL2_AT_EL3}, 0)
FIP_BL2_ARGS := tb-fw
endif
$(if ${BL2}, $(eval $(call TOOL_ADD_IMG,bl2,--${FIP_BL2_ARGS})),\
$(eval $(call MAKE_BL,2,${FIP_BL2_ARGS})))
endif
ifeq (${NEED_SCP_BL2},yes)
$(eval $(call TOOL_ADD_IMG,scp_bl2,--scp-fw))
endif
ifeq (${NEED_BL31},yes)
BL31_SOURCES += ${SPD_SOURCES}
$(if ${BL31}, $(eval $(call TOOL_ADD_IMG,bl31,--soc-fw)),\
$(eval $(call MAKE_BL,31,soc-fw)))
endif
# If a BL32 image is needed but neither BL32 nor BL32_SOURCES is defined, the
# build system will call TOOL_ADD_IMG to print a warning message and abort the
# process. Note that the dependency on BL32 applies to the FIP only.
ifeq (${NEED_BL32},yes)
BUILD_BL32 := $(if $(BL32),,$(if $(BL32_SOURCES),1))
$(if ${BUILD_BL32}, $(eval $(call MAKE_BL,32,tos-fw)),\
$(eval $(call TOOL_ADD_IMG,bl32,--tos-fw)))
endif
# Add the BL33 image if required by the platform
ifeq (${NEED_BL33},yes)
$(eval $(call TOOL_ADD_IMG,bl33,--nt-fw))
endif
ifeq (${NEED_BL2U},yes)
$(if ${BL2U}, $(eval $(call TOOL_ADD_IMG,bl2u,--ap-fwu-cfg,FWU_)),\
$(eval $(call MAKE_BL,2u,ap-fwu-cfg,FWU_)))
endif
# Expand build macros for the different images
ifeq (${NEED_FDT},yes)
$(eval $(call MAKE_DTBS,$(BUILD_PLAT)/fdts,$(FDT_SOURCES)))
endif
locate-checkpatch:
ifndef CHECKPATCH
$(error "Please set CHECKPATCH to point to the Linux checkpatch.pl file, eg: CHECKPATCH=../linux/scripts/checkpatch.pl")
else
ifeq (,$(wildcard ${CHECKPATCH}))
$(error "The file CHECKPATCH points to cannot be found, use eg: CHECKPATCH=../linux/scripts/checkpatch.pl")
endif
endif
clean:
@echo " CLEAN"
$(call SHELL_REMOVE_DIR,${BUILD_PLAT})
${Q}${MAKE} --no-print-directory -C ${FIPTOOLPATH} clean
${Q}${MAKE} PLAT=${PLAT} --no-print-directory -C ${CRTTOOLPATH} clean
realclean distclean:
@echo " REALCLEAN"
$(call SHELL_REMOVE_DIR,${BUILD_BASE})
$(call SHELL_DELETE_ALL, ${CURDIR}/cscope.*)
${Q}${MAKE} --no-print-directory -C ${FIPTOOLPATH} clean
${Q}${MAKE} PLAT=${PLAT} --no-print-directory -C ${CRTTOOLPATH} clean
checkcodebase: locate-checkpatch
@echo " CHECKING STYLE"
@if test -d .git ; then \
git ls-files | grep -E -v 'libfdt|stdlib|docs|\.md' | \
while read GIT_FILE ; \
do ${CHECKPATCH} ${CHECKCODE_ARGS} -f $$GIT_FILE ; \
done ; \
else \
find . -type f -not -iwholename "*.git*" \
-not -iwholename "*build*" \
-not -iwholename "*libfdt*" \
-not -iwholename "*stdlib*" \
-not -iwholename "*docs*" \
-not -iwholename "*.md" \
-exec ${CHECKPATCH} ${CHECKCODE_ARGS} -f {} \; ; \
fi
checkpatch: locate-checkpatch
@echo " CHECKING STYLE"
${Q}COMMON_COMMIT=$$(git merge-base HEAD ${BASE_COMMIT}); \
for commit in `git rev-list $$COMMON_COMMIT..HEAD`; do \
printf "\n[*] Checking style of '$$commit'\n\n"; \
git log --format=email "$$commit~..$$commit" \
-- ${CHECK_PATHS} | ${CHECKPATCH} - || true; \
git diff --format=email "$$commit~..$$commit" \
-- ${CHECK_PATHS} | ${CHECKPATCH} - || true; \
done
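# Example (illustrative): check the style of every commit on the current
# branch that is not yet in origin/master:
#   make CHECKPATCH=../linux/scripts/checkpatch.pl \
#        BASE_COMMIT=origin/master checkpatch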
certtool: ${CRTTOOL}
.PHONY: ${CRTTOOL}
${CRTTOOL}:
${Q}${MAKE} PLAT=${PLAT} USE_TBBR_DEFS=${USE_TBBR_DEFS} --no-print-directory -C ${CRTTOOLPATH}
@${ECHO_BLANK_LINE}
@echo "Built $@ successfully"
@${ECHO_BLANK_LINE}
ifneq (${GENERATE_COT},0)
certificates: ${CRT_DEPS} ${CRTTOOL}
${Q}${CRTTOOL} ${CRT_ARGS}
@${ECHO_BLANK_LINE}
@echo "Built $@ successfully"
@echo "Certificates can be found in ${BUILD_PLAT}"
@${ECHO_BLANK_LINE}
endif
${BUILD_PLAT}/${FIP_NAME}: ${FIP_DEPS} ${FIPTOOL}
${Q}${FIPTOOL} create ${FIP_ARGS} $@
${Q}${FIPTOOL} info $@
@${ECHO_BLANK_LINE}
@echo "Built $@ successfully"
@${ECHO_BLANK_LINE}
ifneq (${GENERATE_COT},0)
fwu_certificates: ${FWU_CRT_DEPS} ${CRTTOOL}
${Q}${CRTTOOL} ${FWU_CRT_ARGS}
@${ECHO_BLANK_LINE}
@echo "Built $@ successfully"
@echo "FWU certificates can be found in ${BUILD_PLAT}"
@${ECHO_BLANK_LINE}
endif
${BUILD_PLAT}/${FWU_FIP_NAME}: ${FWU_FIP_DEPS} ${FIPTOOL}
${Q}${FIPTOOL} create ${FWU_FIP_ARGS} $@
${Q}${FIPTOOL} info $@
@${ECHO_BLANK_LINE}
@echo "Built $@ successfully"
@${ECHO_BLANK_LINE}
fiptool: ${FIPTOOL}
fip: ${BUILD_PLAT}/${FIP_NAME}
fwu_fip: ${BUILD_PLAT}/${FWU_FIP_NAME}
.PHONY: ${FIPTOOL}
${FIPTOOL}:
${Q}${MAKE} CPPFLAGS="-DVERSION='\"${VERSION_STRING}\"'" --no-print-directory -C ${FIPTOOLPATH}
cscope:
@echo " CSCOPE"
${Q}find ${CURDIR} -name "*.[chsS]" > cscope.files
${Q}cscope -b -q -k
help:
@echo "usage: ${MAKE} PLAT=<${PLATFORM_LIST}> [OPTIONS] [TARGET]"
@echo ""
@echo "PLAT is used to specify which platform you wish to build."
@echo "If no platform is specified, PLAT defaults to: ${DEFAULT_PLAT}"
@echo ""
@echo "Please refer to the User Guide for a list of all supported options."
@echo "Note that the build system doesn't track dependencies for build "
@echo "options. Therefore, if any of the build options are changed "
@echo "from a previous build, a clean build must be performed."
@echo ""
@echo "Supported Targets:"
@echo " all Build all individual bootloader binaries"
@echo " bl1 Build the BL1 binary"
@echo " bl2 Build the BL2 binary"
@echo " bl2u Build the BL2U binary"
@echo " bl31 Build the BL31 binary"
@echo " bl32 Build the BL32 binary. If ARCH=aarch32, then "
@echo " this builds secure payload specified by AARCH32_SP"
@echo " certificates Build the certificates (requires 'GENERATE_COT=1')"
@echo " fip Build the Firmware Image Package (FIP)"
@echo " fwu_fip Build the FWU Firmware Image Package (FIP)"
@echo " checkcodebase Check the coding style of the entire source tree"
@echo " checkpatch Check the coding style on changes in the current"
@echo " branch against BASE_COMMIT (default origin/master)"
@echo " clean Clean the build for the selected platform"
@echo " cscope Generate cscope index"
@echo " distclean Remove all build artifacts for all platforms"
@echo " certtool Build the Certificate generation tool"
@echo " fiptool Build the Firmware Image Package (FIP) creation tool"
@echo " dtbs Build the Device Tree Blobs (if required for the platform)"
@echo ""
@echo "Note: most build targets require PLAT to be set to a specific platform."
@echo ""
@echo "example: build all targets for the FVP platform:"
@echo " CROSS_COMPILE=aarch64-none-elf- make PLAT=fvp all"

View File

@@ -0,0 +1,18 @@
Contributor Acknowledgements
============================
Companies
---------
Linaro Limited
NVIDIA Corporation
Socionext Inc.
Xilinx, Inc.
NXP Semiconductors
Individuals
-----------

View File

@@ -0,0 +1,15 @@
/*
* Copyright (c) 2016, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include "../bl1_private.h"
/*******************************************************************************
* TODO: Function that does the first bit of architectural setup.
******************************************************************************/
void bl1_arch_setup(void)
{
}

View File

@@ -0,0 +1,177 @@
/*
* Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch_helpers.h>
#include <assert.h>
#include <context.h>
#include <context_mgmt.h>
#include <debug.h>
#include <platform.h>
#include <smccc_helpers.h>
#include "../bl1_private.h"
/*
* Following arrays will be used for context management.
* There are 2 instances, for the Secure and Non-Secure contexts.
*/
static cpu_context_t bl1_cpu_context[2];
static smc_ctx_t bl1_smc_context[2];
/* Following contains the next cpu context pointer. */
static void *bl1_next_cpu_context_ptr;
/* Following contains the next smc context pointer. */
static void *bl1_next_smc_context_ptr;
/* Following functions are used for SMC context handling */
void *smc_get_ctx(unsigned int security_state)
{
assert(sec_state_is_valid(security_state));
return &bl1_smc_context[security_state];
}
void smc_set_next_ctx(unsigned int security_state)
{
assert(sec_state_is_valid(security_state));
bl1_next_smc_context_ptr = &bl1_smc_context[security_state];
}
void *smc_get_next_ctx(void)
{
return bl1_next_smc_context_ptr;
}
/* Following functions are used for CPU context handling */
void *cm_get_context(uint32_t security_state)
{
assert(sec_state_is_valid(security_state));
return &bl1_cpu_context[security_state];
}
void cm_set_next_context(void *cpu_context)
{
assert(cpu_context);
bl1_next_cpu_context_ptr = cpu_context;
}
void *cm_get_next_context(void)
{
return bl1_next_cpu_context_ptr;
}
/*******************************************************************************
* Following function copies GP regs r0-r3, lr, spsr and scr,
* from the CPU context to the SMC context structures.
******************************************************************************/
static void copy_cpu_ctx_to_smc_ctx(const regs_t *cpu_reg_ctx,
smc_ctx_t *next_smc_ctx)
{
next_smc_ctx->r0 = read_ctx_reg(cpu_reg_ctx, CTX_GPREG_R0);
next_smc_ctx->r1 = read_ctx_reg(cpu_reg_ctx, CTX_GPREG_R1);
next_smc_ctx->r2 = read_ctx_reg(cpu_reg_ctx, CTX_GPREG_R2);
next_smc_ctx->r3 = read_ctx_reg(cpu_reg_ctx, CTX_GPREG_R3);
next_smc_ctx->lr_mon = read_ctx_reg(cpu_reg_ctx, CTX_LR);
next_smc_ctx->spsr_mon = read_ctx_reg(cpu_reg_ctx, CTX_SPSR);
next_smc_ctx->scr = read_ctx_reg(cpu_reg_ctx, CTX_SCR);
}
/*******************************************************************************
* Following function flushes the SMC & CPU context pointers and their data.
******************************************************************************/
static void flush_smc_and_cpu_ctx(void)
{
flush_dcache_range((uintptr_t)&bl1_next_smc_context_ptr,
sizeof(bl1_next_smc_context_ptr));
flush_dcache_range((uintptr_t)bl1_next_smc_context_ptr,
sizeof(smc_ctx_t));
flush_dcache_range((uintptr_t)&bl1_next_cpu_context_ptr,
sizeof(bl1_next_cpu_context_ptr));
flush_dcache_range((uintptr_t)bl1_next_cpu_context_ptr,
sizeof(cpu_context_t));
}
/*******************************************************************************
* This function prepares the context for Secure/Normal world images.
* Normal world images are transitioned to HYP (if supported), else SVC.
******************************************************************************/
void bl1_prepare_next_image(unsigned int image_id)
{
unsigned int security_state;
image_desc_t *image_desc;
entry_point_info_t *next_bl_ep;
/* Get the image descriptor. */
image_desc = bl1_plat_get_image_desc(image_id);
assert(image_desc);
/* Get the entry point info. */
next_bl_ep = &image_desc->ep_info;
/* Get the image security state. */
security_state = GET_SECURITY_STATE(next_bl_ep->h.attr);
/* Prepare the SPSR for the next BL image. */
if (security_state == SECURE) {
next_bl_ep->spsr = SPSR_MODE32(MODE32_svc, SPSR_T_ARM,
SPSR_E_LITTLE, DISABLE_ALL_EXCEPTIONS);
} else {
/* Use HYP mode if supported else use SVC. */
if (GET_VIRT_EXT(read_id_pfr1())) {
next_bl_ep->spsr = SPSR_MODE32(MODE32_hyp, SPSR_T_ARM,
SPSR_E_LITTLE, DISABLE_ALL_EXCEPTIONS);
} else {
next_bl_ep->spsr = SPSR_MODE32(MODE32_svc, SPSR_T_ARM,
SPSR_E_LITTLE, DISABLE_ALL_EXCEPTIONS);
}
}
/* Allow the platform to make changes */
bl1_plat_set_ep_info(image_id, next_bl_ep);
/* Prepare the cpu context for the next BL image. */
cm_init_my_context(next_bl_ep);
cm_prepare_el3_exit(security_state);
cm_set_next_context(cm_get_context(security_state));
/* Prepare the smc context for the next BL image. */
smc_set_next_ctx(security_state);
copy_cpu_ctx_to_smc_ctx(get_regs_ctx(cm_get_next_context()),
smc_get_next_ctx());
/*
* If the next image is non-secure, then we need to program the banked
* non-secure SCTLR. This is not required when the next image is secure
* because in AArch32, we expect the secure world to have the same
* SCTLR settings.
*/
if (security_state == NON_SECURE) {
cpu_context_t *ctx = cm_get_context(security_state);
u_register_t ns_sctlr;
/* Temporarily set the NS bit to access NS SCTLR */
write_scr(read_scr() | SCR_NS_BIT);
isb();
ns_sctlr = read_ctx_reg(get_regs_ctx(ctx), CTX_NS_SCTLR);
write_sctlr(ns_sctlr);
isb();
write_scr(read_scr() & ~SCR_NS_BIT);
isb();
}
/*
* Flush the SMC & CPU context and the (next) pointers,
* to access them after caches are disabled.
*/
flush_smc_and_cpu_ctx();
/* Indicate that image is in execution state. */
image_desc->state = IMAGE_STATE_EXECUTED;
print_entry_point_info(next_bl_ep);
}

View File

@@ -0,0 +1,100 @@
/*
* Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl_common.h>
#include <context.h>
#include <el3_common_macros.S>
#include <smccc_helpers.h>
#include <smccc_macros.S>
.globl bl1_vector_table
.globl bl1_entrypoint
/* -----------------------------------------------------
* Setup the vector table to support SVC & MON mode.
* -----------------------------------------------------
*/
vector_base bl1_vector_table
b bl1_entrypoint
b report_exception /* Undef */
b bl1_aarch32_smc_handler /* SMC call */
b report_exception /* Prefetch abort */
b report_exception /* Data abort */
b report_exception /* Reserved */
b report_exception /* IRQ */
b report_exception /* FIQ */
/* -----------------------------------------------------
* bl1_entrypoint() is the entry point into the trusted
* firmware code when a cpu is released from warm or
* cold reset.
* -----------------------------------------------------
*/
func bl1_entrypoint
/* ---------------------------------------------------------------------
* If the reset address is programmable then bl1_entrypoint() is
* executed only on the cold boot path. Therefore, we can skip the warm
* boot mailbox mechanism.
* ---------------------------------------------------------------------
*/
el3_entrypoint_common \
_init_sctlr=1 \
_warm_boot_mailbox=!PROGRAMMABLE_RESET_ADDRESS \
_secondary_cold_boot=!COLD_BOOT_SINGLE_CPU \
_init_memory=1 \
_init_c_runtime=1 \
_exception_vectors=bl1_vector_table
/* -----------------------------------------------------
* Perform early platform setup & platform
* specific early arch. setup e.g. mmu setup
* -----------------------------------------------------
*/
bl bl1_early_platform_setup
bl bl1_plat_arch_setup
/* -----------------------------------------------------
* Jump to main function.
* -----------------------------------------------------
*/
bl bl1_main
/* -----------------------------------------------------
* Jump to next image.
* -----------------------------------------------------
*/
/*
* Get the smc_context for next BL image,
* program the gp/system registers and save it in `r4`.
*/
bl smc_get_next_ctx
mov r4, r0
/* Only turn off the MMU if going to the secure world */
ldr r5, [r4, #SMC_CTX_SCR]
tst r5, #SCR_NS_BIT
bne skip_mmu_off
/*
* MMU needs to be disabled because both BL1 and BL2/BL2U execute
* in PL1, and therefore share the same address space.
* BL2/BL2U will initialize the address space according to its
* own requirement.
*/
bl disable_mmu_icache_secure
stcopr r0, TLBIALL
dsb sy
isb
skip_mmu_off:
/* Restore smc_context from `r4` and exit secure monitor mode. */
mov r0, r4
monitor_exit
endfunc bl1_entrypoint

View File

@@ -0,0 +1,157 @@
/*
* Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl1.h>
#include <bl_common.h>
#include <context.h>
#include <smccc_helpers.h>
#include <smccc_macros.S>
#include <xlat_tables.h>
.globl bl1_aarch32_smc_handler
func bl1_aarch32_smc_handler
/* On SMC entry, `sp` points to `smc_ctx_t`. Save `lr`. */
str lr, [sp, #SMC_CTX_LR_MON]
/* ------------------------------------------------
* SMC in BL1 is handled assuming that the MMU is
* turned off by BL2.
* ------------------------------------------------
*/
/* ----------------------------------------------
* Detect if this is a RUN_IMAGE or other SMC.
* ----------------------------------------------
*/
mov lr, #BL1_SMC_RUN_IMAGE
cmp lr, r0
bne smc_handler
/* ------------------------------------------------
* Make sure only Secure world reaches here.
* ------------------------------------------------
*/
ldcopr r8, SCR
tst r8, #SCR_NS_BIT
blne report_exception
/* ---------------------------------------------------------------------
* Pass control to next secure image.
* Here it expects r1 to contain the address of an entry_point_info_t
* structure describing the BL entrypoint.
* ---------------------------------------------------------------------
*/
mov r8, r1
mov r0, r1
bl bl1_print_next_bl_ep_info
#if SPIN_ON_BL1_EXIT
bl print_debug_loop_message
debug_loop:
b debug_loop
#endif
mov r0, r8
bl bl1_plat_prepare_exit
stcopr r0, TLBIALL
dsb sy
isb
/*
* Extract PC and SPSR based on struct `entry_point_info_t`
* and load it in LR and SPSR registers respectively.
*/
ldr lr, [r8, #ENTRY_POINT_INFO_PC_OFFSET]
ldr r1, [r8, #(ENTRY_POINT_INFO_PC_OFFSET + 4)]
msr spsr, r1
/* Some BL32 stages expect lr_svc to provide the BL33 entry address */
cps #MODE32_svc
ldr lr, [r8, #ENTRY_POINT_INFO_LR_SVC_OFFSET]
cps #MODE32_mon
add r8, r8, #ENTRY_POINT_INFO_ARGS_OFFSET
ldm r8, {r0, r1, r2, r3}
eret
endfunc bl1_aarch32_smc_handler
/* -----------------------------------------------------
* Save Secure/Normal world context and jump to
* BL1 SMC handler.
* -----------------------------------------------------
*/
func smc_handler
/* -----------------------------------------------------
* Save the GP registers.
* -----------------------------------------------------
*/
smccc_save_gp_mode_regs
/*
* `sp` still points to `smc_ctx_t`. Save it to a register
* and restore the C runtime stack pointer to `sp`.
*/
mov r6, sp
ldr sp, [r6, #SMC_CTX_SP_MON]
ldr r0, [r6, #SMC_CTX_SCR]
and r7, r0, #SCR_NS_BIT /* flags */
/* Switch to Secure Mode */
bic r0, #SCR_NS_BIT
stcopr r0, SCR
isb
/* If caller is from Secure world then turn on the MMU */
tst r7, #SCR_NS_BIT
bne skip_mmu_on
/* Turn on the MMU */
mov r0, #DISABLE_DCACHE
bl enable_mmu_secure
/* Enable the data cache. */
ldcopr r9, SCTLR
orr r9, r9, #SCTLR_C_BIT
stcopr r9, SCTLR
isb
skip_mmu_on:
/* Prepare arguments for BL1 SMC wrapper. */
ldr r0, [r6, #SMC_CTX_GPREG_R0] /* smc_fid */
mov r1, #0 /* cookie */
mov r2, r6 /* handle */
mov r3, r7 /* flags */
bl bl1_smc_wrapper
/* Get the smc_context for next BL image */
bl smc_get_next_ctx
mov r4, r0
/* Only turn off the MMU if going to the secure world */
ldr r5, [r4, #SMC_CTX_SCR]
tst r5, #SCR_NS_BIT
bne skip_mmu_off
/* Disable the MMU */
bl disable_mmu_icache_secure
stcopr r0, TLBIALL
dsb sy
isb
skip_mmu_off:
/* -----------------------------------------------------
* Do the transition to next BL image.
* -----------------------------------------------------
*/
mov r0, r4
monitor_exit
endfunc smc_handler

View File

@@ -0,0 +1,35 @@
/*
* Copyright (c) 2013-2014, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <arch_helpers.h>
#include "../bl1_private.h"
/*******************************************************************************
* Function that does the first bit of architectural setup that affects
* execution in the non-secure address space.
******************************************************************************/
void bl1_arch_setup(void)
{
/* Set the next EL to be AArch64 */
write_scr_el3(read_scr_el3() | SCR_RW_BIT);
}
/*******************************************************************************
* Set the Secure EL1 required architectural state
******************************************************************************/
void bl1_arch_next_el_setup(void)
{
unsigned long next_sctlr;
/* Use the same endianness as the current BL */
next_sctlr = (read_sctlr_el3() & SCTLR_EE_BIT);
/* Set SCTLR Secure EL1 */
next_sctlr |= SCTLR_EL1_RES1;
write_sctlr_el1(next_sctlr);
}

View File

@@ -0,0 +1,99 @@
/*
* Copyright (c) 2015-2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch_helpers.h>
#include <assert.h>
#include <context.h>
#include <context_mgmt.h>
#include <debug.h>
#include <platform.h>
#include "../bl1_private.h"
/*
* Following array will be used for context management.
* There are 2 instances, for the Secure and Non-Secure contexts.
*/
static cpu_context_t bl1_cpu_context[2];
/* Following contains the cpu context pointers. */
static void *bl1_cpu_context_ptr[2];
void *cm_get_context(uint32_t security_state)
{
assert(sec_state_is_valid(security_state));
return bl1_cpu_context_ptr[security_state];
}
void cm_set_context(void *context, uint32_t security_state)
{
assert(sec_state_is_valid(security_state));
bl1_cpu_context_ptr[security_state] = context;
}
/*******************************************************************************
* This function prepares the context for Secure/Normal world images.
* Normal world images are transitioned to EL2 (if supported), else EL1.
******************************************************************************/
void bl1_prepare_next_image(unsigned int image_id)
{
unsigned int security_state;
image_desc_t *image_desc;
entry_point_info_t *next_bl_ep;
#if CTX_INCLUDE_AARCH32_REGS
/*
* Ensure that the build flag to save AArch32 system registers in CPU
* context is not set for AArch64-only platforms.
*/
if (EL_IMPLEMENTED(1) == EL_IMPL_A64ONLY) {
ERROR("EL1 supports AArch64-only. Please set build flag "
"CTX_INCLUDE_AARCH32_REGS = 0");
panic();
}
#endif
/* Get the image descriptor. */
image_desc = bl1_plat_get_image_desc(image_id);
assert(image_desc);
/* Get the entry point info. */
next_bl_ep = &image_desc->ep_info;
/* Get the image security state. */
security_state = GET_SECURITY_STATE(next_bl_ep->h.attr);
/* Setup the Secure/Non-Secure context if not done already. */
if (!cm_get_context(security_state))
cm_set_context(&bl1_cpu_context[security_state], security_state);
/* Prepare the SPSR for the next BL image. */
if (security_state == SECURE) {
next_bl_ep->spsr = SPSR_64(MODE_EL1, MODE_SP_ELX,
DISABLE_ALL_EXCEPTIONS);
} else {
/* Use EL2 if supported; else use EL1. */
if (EL_IMPLEMENTED(2)) {
next_bl_ep->spsr = SPSR_64(MODE_EL2, MODE_SP_ELX,
DISABLE_ALL_EXCEPTIONS);
} else {
next_bl_ep->spsr = SPSR_64(MODE_EL1, MODE_SP_ELX,
DISABLE_ALL_EXCEPTIONS);
}
}
/* Allow the platform to make changes */
bl1_plat_set_ep_info(image_id, next_bl_ep);
/* Prepare the context for the next BL image. */
cm_init_my_context(next_bl_ep);
cm_prepare_el3_exit(security_state);
/* Indicate that image is in execution state. */
image_desc->state = IMAGE_STATE_EXECUTED;
print_entry_point_info(next_bl_ep);
}

View File

@@ -0,0 +1,58 @@
/*
* Copyright (c) 2013-2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <el3_common_macros.S>
.globl bl1_entrypoint
/* -----------------------------------------------------
* bl1_entrypoint() is the entry point into the trusted
* firmware code when a cpu is released from warm or
* cold reset.
* -----------------------------------------------------
*/
func bl1_entrypoint
/* ---------------------------------------------------------------------
* If the reset address is programmable then bl1_entrypoint() is
* executed only on the cold boot path. Therefore, we can skip the warm
* boot mailbox mechanism.
* ---------------------------------------------------------------------
*/
el3_entrypoint_common \
_init_sctlr=1 \
_warm_boot_mailbox=!PROGRAMMABLE_RESET_ADDRESS \
_secondary_cold_boot=!COLD_BOOT_SINGLE_CPU \
_init_memory=1 \
_init_c_runtime=1 \
_exception_vectors=bl1_exceptions
/* ---------------------------------------------
* Architectural init. can be generic e.g.
* enabling stack alignment and platform spec-
* ific e.g. MMU & page table setup as per the
* platform memory map. Perform the latter here
* and the former in bl1_main.
* ---------------------------------------------
*/
bl bl1_early_platform_setup
bl bl1_plat_arch_setup
/* --------------------------------------------------
* Initialize platform and jump to our c-entry point
* for this type of reset.
* --------------------------------------------------
*/
bl bl1_main
/* --------------------------------------------------
* Do the transition to next boot image.
* --------------------------------------------------
*/
b el3_exit
endfunc bl1_entrypoint

View File

@@ -0,0 +1,277 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl1.h>
#include <bl_common.h>
#include <context.h>
/* -----------------------------------------------------------------------------
* Very simple stackless exception handlers used by BL1.
* -----------------------------------------------------------------------------
*/
.globl bl1_exceptions
vector_base bl1_exceptions
/* -----------------------------------------------------
* Current EL with SP0 : 0x0 - 0x200
* -----------------------------------------------------
*/
vector_entry SynchronousExceptionSP0
mov x0, #SYNC_EXCEPTION_SP_EL0
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SynchronousExceptionSP0
vector_entry IrqSP0
mov x0, #IRQ_SP_EL0
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size IrqSP0
vector_entry FiqSP0
mov x0, #FIQ_SP_EL0
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size FiqSP0
vector_entry SErrorSP0
mov x0, #SERROR_SP_EL0
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SErrorSP0
/* -----------------------------------------------------
* Current EL with SPx: 0x200 - 0x400
* -----------------------------------------------------
*/
vector_entry SynchronousExceptionSPx
mov x0, #SYNC_EXCEPTION_SP_ELX
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SynchronousExceptionSPx
vector_entry IrqSPx
mov x0, #IRQ_SP_ELX
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size IrqSPx
vector_entry FiqSPx
mov x0, #FIQ_SP_ELX
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size FiqSPx
vector_entry SErrorSPx
mov x0, #SERROR_SP_ELX
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SErrorSPx
/* -----------------------------------------------------
* Lower EL using AArch64 : 0x400 - 0x600
* -----------------------------------------------------
*/
vector_entry SynchronousExceptionA64
/* Enable the SError interrupt */
msr daifclr, #DAIF_ABT_BIT
str x30, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_LR]
/* Expect only SMC exceptions */
mrs x30, esr_el3
ubfx x30, x30, #ESR_EC_SHIFT, #ESR_EC_LENGTH
cmp x30, #EC_AARCH64_SMC
b.ne unexpected_sync_exception
b smc_handler64
check_vector_size SynchronousExceptionA64
vector_entry IrqA64
mov x0, #IRQ_AARCH64
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size IrqA64
vector_entry FiqA64
mov x0, #FIQ_AARCH64
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size FiqA64
vector_entry SErrorA64
mov x0, #SERROR_AARCH64
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SErrorA64
/* -----------------------------------------------------
* Lower EL using AArch32 : 0x600 - 0x800
* -----------------------------------------------------
*/
vector_entry SynchronousExceptionA32
mov x0, #SYNC_EXCEPTION_AARCH32
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SynchronousExceptionA32
vector_entry IrqA32
mov x0, #IRQ_AARCH32
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size IrqA32
vector_entry FiqA32
mov x0, #FIQ_AARCH32
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size FiqA32
vector_entry SErrorA32
mov x0, #SERROR_AARCH32
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SErrorA32
func smc_handler64
/* ----------------------------------------------
* Detect if this is a RUN_IMAGE or other SMC.
* ----------------------------------------------
*/
mov x30, #BL1_SMC_RUN_IMAGE
cmp x30, x0
b.ne smc_handler
/* ------------------------------------------------
* Make sure only Secure world reaches here.
* ------------------------------------------------
*/
mrs x30, scr_el3
tst x30, #SCR_NS_BIT
b.ne unexpected_sync_exception
/* ----------------------------------------------
* Handling RUN_IMAGE SMC. First switch back to
* SP_EL0 for the C runtime stack.
* ----------------------------------------------
*/
ldr x30, [sp, #CTX_EL3STATE_OFFSET + CTX_RUNTIME_SP]
msr spsel, #0
mov sp, x30
/* ---------------------------------------------------------------------
* Pass EL3 control to next BL image.
* Here it expects X1 to contain the address of an entry_point_info_t
* structure describing the next BL image entrypoint.
* ---------------------------------------------------------------------
*/
mov x20, x1
mov x0, x20
bl bl1_print_next_bl_ep_info
ldp x0, x1, [x20, #ENTRY_POINT_INFO_PC_OFFSET]
msr elr_el3, x0
msr spsr_el3, x1
ubfx x0, x1, #MODE_EL_SHIFT, #2
cmp x0, #MODE_EL3
b.ne unexpected_sync_exception
bl disable_mmu_icache_el3
tlbi alle3
dsb ish /* ERET implies ISB, so it is not needed here */
#if SPIN_ON_BL1_EXIT
bl print_debug_loop_message
debug_loop:
b debug_loop
#endif
mov x0, x20
bl bl1_plat_prepare_exit
ldp x6, x7, [x20, #(ENTRY_POINT_INFO_ARGS_OFFSET + 0x30)]
ldp x4, x5, [x20, #(ENTRY_POINT_INFO_ARGS_OFFSET + 0x20)]
ldp x2, x3, [x20, #(ENTRY_POINT_INFO_ARGS_OFFSET + 0x10)]
ldp x0, x1, [x20, #(ENTRY_POINT_INFO_ARGS_OFFSET + 0x0)]
eret
endfunc smc_handler64
unexpected_sync_exception:
mov x0, #SYNC_EXCEPTION_AARCH64
bl plat_report_exception
no_ret plat_panic_handler
/* -----------------------------------------------------
* Save Secure/Normal world context and jump to
* BL1 SMC handler.
* -----------------------------------------------------
*/
smc_handler:
/* -----------------------------------------------------
* Save the GP registers x0-x29.
* TODO: Revisit to store only SMCCC specified registers.
* -----------------------------------------------------
*/
bl save_gp_registers
/* -----------------------------------------------------
* Populate the parameters for the SMC handler. We
* already have x0-x4 in place. x5 will point to a
* cookie (not used now). x6 will point to the context
* structure (SP_EL3) and x7 will contain flags we need
* to pass to the handler.
* -----------------------------------------------------
*/
mov x5, xzr
mov x6, sp
/* -----------------------------------------------------
* Restore the saved C runtime stack value which will
* become the new SP_EL0 i.e. EL3 runtime stack. It was
* saved in the 'cpu_context' structure prior to the last
* ERET from EL3.
* -----------------------------------------------------
*/
ldr x12, [x6, #CTX_EL3STATE_OFFSET + CTX_RUNTIME_SP]
/* ---------------------------------------------
* Switch back to SP_EL0 for the C runtime stack.
* ---------------------------------------------
*/
msr spsel, #0
mov sp, x12
/* -----------------------------------------------------
* Save the SPSR_EL3, ELR_EL3, & SCR_EL3 in case there
* is a world switch during SMC handling.
* -----------------------------------------------------
*/
mrs x16, spsr_el3
mrs x17, elr_el3
mrs x18, scr_el3
stp x16, x17, [x6, #CTX_EL3STATE_OFFSET + CTX_SPSR_EL3]
str x18, [x6, #CTX_EL3STATE_OFFSET + CTX_SCR_EL3]
/* Copy SCR_EL3.NS bit to the flag to indicate caller's security */
bfi x7, x18, #0, #1
/* -----------------------------------------------------
* Go to BL1 SMC handler.
* -----------------------------------------------------
*/
bl bl1_smc_handler
/* -----------------------------------------------------
* Do the transition to next BL image.
* -----------------------------------------------------
*/
b el3_exit
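The bfi instruction above copies SCR_EL3.NS into bit 0 of the x7 flags argument, which is how the C-level handler later learns which world issued the SMC. A minimal sketch of the consuming side, assuming only that bit 0 mirrors SCR_EL3.NS (1 = non-secure caller):
/* Sketch: decode the caller's security state from the flags word
 * built above. Assumes bit 0 carries SCR_EL3.NS. */
static inline int caller_is_non_secure(unsigned int flags)
{
	return (int)(flags & 1U); /* 1: non-secure, 0: secure */
}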


@@ -0,0 +1,182 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <platform_def.h>
#include <xlat_tables_defs.h>
OUTPUT_FORMAT(PLATFORM_LINKER_FORMAT)
OUTPUT_ARCH(PLATFORM_LINKER_ARCH)
ENTRY(bl1_entrypoint)
MEMORY {
ROM (rx): ORIGIN = BL1_RO_BASE, LENGTH = BL1_RO_LIMIT - BL1_RO_BASE
RAM (rwx): ORIGIN = BL1_RW_BASE, LENGTH = BL1_RW_LIMIT - BL1_RW_BASE
}
SECTIONS
{
. = BL1_RO_BASE;
ASSERT(. == ALIGN(PAGE_SIZE),
"BL1_RO_BASE address is not aligned on a page boundary.")
#if SEPARATE_CODE_AND_RODATA
.text . : {
__TEXT_START__ = .;
*bl1_entrypoint.o(.text*)
*(.text*)
*(.vectors)
. = NEXT(PAGE_SIZE);
__TEXT_END__ = .;
} >ROM
.rodata . : {
__RODATA_START__ = .;
*(.rodata*)
/* Ensure 8-byte alignment for descriptors and ensure inclusion */
. = ALIGN(8);
__PARSER_LIB_DESCS_START__ = .;
KEEP(*(.img_parser_lib_descs))
__PARSER_LIB_DESCS_END__ = .;
/*
* Ensure 8-byte alignment for cpu_ops so that its fields are also
* aligned. Also ensure cpu_ops inclusion.
*/
. = ALIGN(8);
__CPU_OPS_START__ = .;
KEEP(*(cpu_ops))
__CPU_OPS_END__ = .;
/*
* No need to pad out the .rodata section to a page boundary. Next is
* the .data section, which can be mapped in ROM with the same memory
* attributes as the .rodata section.
*/
__RODATA_END__ = .;
} >ROM
#else
ro . : {
__RO_START__ = .;
*bl1_entrypoint.o(.text*)
*(.text*)
*(.rodata*)
/* Ensure 8-byte alignment for descriptors and ensure inclusion */
. = ALIGN(8);
__PARSER_LIB_DESCS_START__ = .;
KEEP(*(.img_parser_lib_descs))
__PARSER_LIB_DESCS_END__ = .;
/*
* Ensure 8-byte alignment for cpu_ops so that its fields are also
* aligned. Also ensure cpu_ops inclusion.
*/
. = ALIGN(8);
__CPU_OPS_START__ = .;
KEEP(*(cpu_ops))
__CPU_OPS_END__ = .;
*(.vectors)
__RO_END__ = .;
} >ROM
#endif
ASSERT(__CPU_OPS_END__ > __CPU_OPS_START__,
"cpu_ops not defined for this platform.")
. = BL1_RW_BASE;
ASSERT(BL1_RW_BASE == ALIGN(PAGE_SIZE),
"BL1_RW_BASE address is not aligned on a page boundary.")
/*
* The .data section gets copied from ROM to RAM at runtime.
* Its LMA should be 16-byte aligned to allow efficient copying of 16-byte
* aligned regions in it.
* Its VMA must be page-aligned as it marks the first read/write page.
*
* It must be placed at a lower address than the stacks if the stack
* protector is enabled. Alternatively, the .data.stack_protector_canary
* section can be placed independently of the main .data section.
*/
.data . : ALIGN(16) {
__DATA_RAM_START__ = .;
*(.data*)
__DATA_RAM_END__ = .;
} >RAM AT>ROM
stacks . (NOLOAD) : {
__STACKS_START__ = .;
*(tzfw_normal_stacks)
__STACKS_END__ = .;
} >RAM
/*
* The .bss section gets initialised to 0 at runtime.
* Its base address should be 16-byte aligned for better performance of the
* zero-initialization code.
*/
.bss : ALIGN(16) {
__BSS_START__ = .;
*(.bss*)
*(COMMON)
__BSS_END__ = .;
} >RAM
/*
* The xlat_table section is for full, aligned page tables (4K).
* Removing them from .bss avoids forcing 4K alignment on
* the .bss section. The tables are initialized to zero by the translation
* tables library.
*/
xlat_table (NOLOAD) : {
*(xlat_table)
} >RAM
#if USE_COHERENT_MEM
/*
* The base address of the coherent memory section must be page-aligned (4K)
* to guarantee that the coherent data are stored on their own pages and
* are not mixed with normal data. This is required to set up the correct
* memory attributes for the coherent data page tables.
*/
coherent_ram (NOLOAD) : ALIGN(PAGE_SIZE) {
__COHERENT_RAM_START__ = .;
*(tzfw_coherent_mem)
__COHERENT_RAM_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked
* as device memory. No other unexpected data must creep in.
* Ensure the rest of the current memory page is unused.
*/
. = NEXT(PAGE_SIZE);
__COHERENT_RAM_END__ = .;
} >RAM
#endif
__BL1_RAM_START__ = ADDR(.data);
__BL1_RAM_END__ = .;
__DATA_ROM_START__ = LOADADDR(.data);
__DATA_SIZE__ = SIZEOF(.data);
/*
* The .data section is the last PROGBITS section so its end marks the end
* of BL1's actual content in Trusted ROM.
*/
__BL1_ROM_END__ = __DATA_ROM_START__ + __DATA_SIZE__;
ASSERT(__BL1_ROM_END__ <= BL1_RO_LIMIT,
"BL1's ROM content has exceeded its limit.")
__BSS_SIZE__ = SIZEOF(.bss);
#if USE_COHERENT_MEM
__COHERENT_RAM_UNALIGNED_SIZE__ =
__COHERENT_RAM_END_UNALIGNED__ - __COHERENT_RAM_START__;
#endif
ASSERT(. <= BL1_RW_LIMIT, "BL1's RW section has exceeded its limit.")
}
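Because .data is emitted with >RAM AT>ROM, its load address (LMA) in ROM differs from its run address (VMA) in RAM; __DATA_ROM_START__, __DATA_RAM_START__ and __DATA_SIZE__ exist so that early boot code can perform the copy (TF does the equivalent in its entrypoint assembly). A minimal C sketch of that copy, under the assumption that these three linker symbols are the whole interface:
/* Sketch: copy .data from its ROM load address to its RAM run
 * address using the linker-defined symbols above. */
extern char __DATA_ROM_START__[], __DATA_RAM_START__[];
extern char __DATA_SIZE__[]; /* absolute symbol: its address is the size */

static void copy_data_to_ram(void)
{
	unsigned long n = (unsigned long)__DATA_SIZE__;
	const char *src = __DATA_ROM_START__;
	char *dst = __DATA_RAM_START__;

	while (n-- != 0UL)
		*dst++ = *src++;
}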


@@ -0,0 +1,29 @@
#
# Copyright (c) 2013-2017, ARM Limited and Contributors. All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
#
BL1_SOURCES += bl1/bl1_main.c \
bl1/${ARCH}/bl1_arch_setup.c \
bl1/${ARCH}/bl1_context_mgmt.c \
bl1/${ARCH}/bl1_entrypoint.S \
bl1/${ARCH}/bl1_exceptions.S \
lib/cpus/${ARCH}/cpu_helpers.S \
lib/cpus/errata_report.c \
lib/el3_runtime/${ARCH}/context_mgmt.c \
plat/common/plat_bl1_common.c \
plat/common/${ARCH}/platform_up_stack.S \
${MBEDTLS_COMMON_SOURCES} \
${MBEDTLS_CRYPTO_SOURCES} \
${MBEDTLS_X509_SOURCES}
ifeq (${ARCH},aarch64)
BL1_SOURCES += lib/el3_runtime/aarch64/context.S
endif
ifeq (${TRUSTED_BOARD_BOOT},1)
BL1_SOURCES += bl1/bl1_fwu.c
endif
BL1_LINKERFILE := bl1/bl1.ld.S


@@ -0,0 +1,755 @@
/*
* Copyright (c) 2015-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch_helpers.h>
#include <assert.h>
#include <auth_mod.h>
#include <bl1.h>
#include <bl_common.h>
#include <context.h>
#include <context_mgmt.h>
#include <debug.h>
#include <errno.h>
#include <platform.h>
#include <platform_def.h>
#include <smccc_helpers.h>
#include <string.h>
#include <utils.h>
#include "bl1_private.h"
/*
* Function declarations.
*/
static int bl1_fwu_image_copy(unsigned int image_id,
uintptr_t image_src,
unsigned int block_size,
unsigned int image_size,
unsigned int flags);
static int bl1_fwu_image_auth(unsigned int image_id,
uintptr_t image_src,
unsigned int image_size,
unsigned int flags);
static int bl1_fwu_image_execute(unsigned int image_id,
void **handle,
unsigned int flags);
static register_t bl1_fwu_image_resume(register_t image_param,
void **handle,
unsigned int flags);
static int bl1_fwu_sec_image_done(void **handle,
unsigned int flags);
static int bl1_fwu_image_reset(unsigned int image_id,
unsigned int flags);
__dead2 static void bl1_fwu_done(void *client_cookie, void *reserved);
/*
* This keeps track of the last executed secure image id.
*/
static unsigned int sec_exec_image_id = INVALID_IMAGE_ID;
/* Authentication status of each image. */
extern unsigned int auth_img_flags[MAX_NUMBER_IDS];
/*******************************************************************************
* Top level handler for servicing FWU SMCs.
******************************************************************************/
register_t bl1_fwu_smc_handler(unsigned int smc_fid,
register_t x1,
register_t x2,
register_t x3,
register_t x4,
void *cookie,
void *handle,
unsigned int flags)
{
switch (smc_fid) {
case FWU_SMC_IMAGE_COPY:
SMC_RET1(handle, bl1_fwu_image_copy(x1, x2, x3, x4, flags));
case FWU_SMC_IMAGE_AUTH:
SMC_RET1(handle, bl1_fwu_image_auth(x1, x2, x3, flags));
case FWU_SMC_IMAGE_EXECUTE:
SMC_RET1(handle, bl1_fwu_image_execute(x1, &handle, flags));
case FWU_SMC_IMAGE_RESUME:
SMC_RET1(handle, bl1_fwu_image_resume(x1, &handle, flags));
case FWU_SMC_SEC_IMAGE_DONE:
SMC_RET1(handle, bl1_fwu_sec_image_done(&handle, flags));
case FWU_SMC_IMAGE_RESET:
SMC_RET1(handle, bl1_fwu_image_reset(x1, flags));
case FWU_SMC_UPDATE_DONE:
bl1_fwu_done((void *)x1, NULL);
/* We should never return from bl1_fwu_done() */
break;
default:
assert(0);
break;
}
SMC_RET1(handle, SMC_UNK);
}
/*******************************************************************************
* Utility functions to keep track of the images that are loaded at any time.
******************************************************************************/
#ifdef PLAT_FWU_MAX_SIMULTANEOUS_IMAGES
#define FWU_MAX_SIMULTANEOUS_IMAGES PLAT_FWU_MAX_SIMULTANEOUS_IMAGES
#else
#define FWU_MAX_SIMULTANEOUS_IMAGES 10
#endif
static int bl1_fwu_loaded_ids[FWU_MAX_SIMULTANEOUS_IMAGES] = {
[0 ... FWU_MAX_SIMULTANEOUS_IMAGES-1] = INVALID_IMAGE_ID
};
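The [0 ... FWU_MAX_SIMULTANEOUS_IMAGES-1] initializer is a GNU C range designator: it fills every slot with INVALID_IMAGE_ID at compile time, so the list starts out empty without needing a runtime loop.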
/*
* Adds an image_id to the bl1_fwu_loaded_ids array.
* Returns 0 on success, 1 on error.
*/
static int bl1_fwu_add_loaded_id(int image_id)
{
int i;
/* Check if the ID is already in the list */
for (i = 0; i < FWU_MAX_SIMULTANEOUS_IMAGES; i++) {
if (bl1_fwu_loaded_ids[i] == image_id)
return 0;
}
/* Find an empty slot */
for (i = 0; i < FWU_MAX_SIMULTANEOUS_IMAGES; i++) {
if (bl1_fwu_loaded_ids[i] == INVALID_IMAGE_ID) {
bl1_fwu_loaded_ids[i] = image_id;
return 0;
}
}
return 1;
}
/*
* Removes an image_id from the bl1_fwu_loaded_ids array.
* Returns 0 on success, 1 on error.
*/
static int bl1_fwu_remove_loaded_id(int image_id)
{
int i;
/* Find the ID */
for (i = 0; i < FWU_MAX_SIMULTANEOUS_IMAGES; i++) {
if (bl1_fwu_loaded_ids[i] == image_id) {
bl1_fwu_loaded_ids[i] = INVALID_IMAGE_ID;
return 0;
}
}
return 1;
}
/*******************************************************************************
* This function checks if the specified image overlaps another image already
* loaded. It returns 0 if there is no overlap, a negative error code otherwise.
******************************************************************************/
static int bl1_fwu_image_check_overlaps(int image_id)
{
const image_desc_t *image_desc, *checked_image_desc;
const image_info_t *info, *checked_info;
uintptr_t image_base, image_end;
uintptr_t checked_image_base, checked_image_end;
checked_image_desc = bl1_plat_get_image_desc(image_id);
checked_info = &checked_image_desc->image_info;
/* Image being checked mustn't be empty. */
assert(checked_info->image_size != 0);
checked_image_base = checked_info->image_base;
checked_image_end = checked_image_base + checked_info->image_size - 1;
/* No need to check for overflows, it's done in bl1_fwu_image_copy(). */
for (int i = 0; i < FWU_MAX_SIMULTANEOUS_IMAGES; i++) {
/* Skip INVALID_IMAGE_IDs and don't check image against itself */
if ((bl1_fwu_loaded_ids[i] == INVALID_IMAGE_ID) ||
(bl1_fwu_loaded_ids[i] == image_id))
continue;
image_desc = bl1_plat_get_image_desc(bl1_fwu_loaded_ids[i]);
/* Only check images that are loaded or being loaded. */
assert(image_desc && image_desc->state != IMAGE_STATE_RESET);
info = &image_desc->image_info;
/* There cannot be overlaps with an empty image. */
if (info->image_size == 0)
continue;
image_base = info->image_base;
image_end = image_base + info->image_size - 1;
/*
* Overflows cannot happen. It is checked in
* bl1_fwu_image_copy() when the image goes from RESET to
* COPYING or COPIED.
*/
assert(image_end > image_base);
/* Check if there are overlaps. */
if (!(image_end < checked_image_base ||
checked_image_end < image_base)) {
VERBOSE("Image with ID %d overlaps existing image with ID %d",
checked_image_desc->image_id, image_desc->image_id);
return -EPERM;
}
}
return 0;
}
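The condition above is the standard closed-interval intersection test: [a, b] and [c, d] overlap unless one range ends before the other begins. As a standalone sketch:
/* Sketch: the overlap predicate used above.
 * e.g. ranges_overlap(0x1000, 0x1fff, 0x1800, 0x27ff) returns 1. */
#include <stdint.h>

static int ranges_overlap(uintptr_t a, uintptr_t b,
			  uintptr_t c, uintptr_t d)
{
	return !(b < c || d < a);
}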
/*******************************************************************************
* This function is responsible for copying secure images in AP Secure RAM.
******************************************************************************/
static int bl1_fwu_image_copy(unsigned int image_id,
uintptr_t image_src,
unsigned int block_size,
unsigned int image_size,
unsigned int flags)
{
uintptr_t dest_addr;
unsigned int remaining;
/* Get the image descriptor. */
image_desc_t *image_desc = bl1_plat_get_image_desc(image_id);
if (!image_desc) {
WARN("BL1-FWU: Invalid image ID %u\n", image_id);
return -EPERM;
}
/*
* The request must originate from a non-secure caller and target a
* secure image. Any other scenario is invalid.
*/
if (GET_SECURITY_STATE(flags) == SECURE) {
WARN("BL1-FWU: Copy not allowed from secure world.\n");
return -EPERM;
}
if (GET_SECURITY_STATE(image_desc->ep_info.h.attr) == NON_SECURE) {
WARN("BL1-FWU: Copy not allowed for non-secure images.\n");
return -EPERM;
}
/* Check whether the FWU state machine is in the correct state. */
if ((image_desc->state != IMAGE_STATE_RESET) &&
(image_desc->state != IMAGE_STATE_COPYING)) {
WARN("BL1-FWU: Copy not allowed at this point of the FWU"
" process.\n");
return -EPERM;
}
if ((!image_src) || (!block_size) ||
check_uptr_overflow(image_src, block_size - 1)) {
WARN("BL1-FWU: Copy not allowed due to invalid image source"
" or block size\n");
return -ENOMEM;
}
if (image_desc->state == IMAGE_STATE_COPYING) {
/*
* There must have been at least 1 copy operation for this image
* previously.
*/
assert(image_desc->copied_size != 0);
/*
* The image size must have been recorded in the 1st copy
* operation.
*/
image_size = image_desc->image_info.image_size;
assert(image_size != 0);
assert(image_desc->copied_size < image_size);
INFO("BL1-FWU: Continuing image copy in blocks\n");
} else { /* image_desc->state == IMAGE_STATE_RESET */
INFO("BL1-FWU: Initial call to copy an image\n");
/*
* image_size is relevant only for the 1st copy request; it is
* then ignored for subsequent calls for this image.
*/
if (!image_size) {
WARN("BL1-FWU: Copy not allowed due to invalid image"
" size\n");
return -ENOMEM;
}
#if LOAD_IMAGE_V2
/* Check that the image size to load is within limit */
if (image_size > image_desc->image_info.image_max_size) {
WARN("BL1-FWU: Image size out of bounds\n");
return -ENOMEM;
}
#else
/*
* Check the image will fit into the free trusted RAM after BL1
* load.
*/
const meminfo_t *mem_layout = bl1_plat_sec_mem_layout();
if (!is_mem_free(mem_layout->free_base, mem_layout->free_size,
image_desc->image_info.image_base,
image_size)) {
WARN("BL1-FWU: Copy not allowed due to insufficient"
" resources.\n");
return -ENOMEM;
}
#endif
/* Save the given image size. */
image_desc->image_info.image_size = image_size;
/* Make sure the image doesn't overlap other images. */
if (bl1_fwu_image_check_overlaps(image_id)) {
image_desc->image_info.image_size = 0;
WARN("BL1-FWU: This image overlaps another one\n");
return -EPERM;
}
/*
* copied_size must be explicitly initialized here because the
* FWU code doesn't necessarily do it when it resets the state
* machine.
*/
image_desc->copied_size = 0;
}
/*
* If the given block size is more than the remaining image size
* then clip the former to the latter.
*/
remaining = image_size - image_desc->copied_size;
if (block_size > remaining) {
WARN("BL1-FWU: Block size is too big, clipping it.\n");
block_size = remaining;
}
/* Make sure the source image is mapped in memory. */
if (bl1_plat_mem_check(image_src, block_size, flags)) {
WARN("BL1-FWU: Source image is not mapped.\n");
return -ENOMEM;
}
if (bl1_fwu_add_loaded_id(image_id)) {
WARN("BL1-FWU: Too many images loaded at the same time.\n");
return -ENOMEM;
}
/* Allow the platform to handle pre-image load before copying */
if (image_desc->state == IMAGE_STATE_RESET) {
if (bl1_plat_handle_pre_image_load(image_id) != 0) {
ERROR("BL1-FWU: Failure in pre-image load of image id %d\n",
image_id);
return -EPERM;
}
}
/* Everything looks sane. Go ahead and copy the block of data. */
dest_addr = image_desc->image_info.image_base + image_desc->copied_size;
memcpy((void *) dest_addr, (const void *) image_src, block_size);
flush_dcache_range(dest_addr, block_size);
image_desc->copied_size += block_size;
image_desc->state = (block_size == remaining) ?
IMAGE_STATE_COPIED : IMAGE_STATE_COPYING;
INFO("BL1-FWU: Copy operation successful.\n");
return 0;
}
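The COPYING/COPIED handling above implements a resumable, block-wise copy: the first call pins image_size, and subsequent calls append until copied_size reaches it. A hypothetical normal-world client loop driving this protocol could look as follows; smc_call() is a stand-in for the client's SMC invocation primitive, not a TF API:
/* Sketch: push an image to BL1 block by block via FWU_SMC_IMAGE_COPY. */
#include <stdint.h>

extern int smc_call(unsigned int fid, unsigned int a1, uintptr_t a2,
		    unsigned int a3, unsigned int a4); /* hypothetical */

static int fwu_copy_image(unsigned int image_id, uintptr_t src,
			  unsigned int total_size, unsigned int block)
{
	unsigned int done = 0;

	while (done < total_size) {
		unsigned int n = total_size - done;

		if (n > block)
			n = block;
		/* image_size (last arg) is only honoured on the 1st call */
		if (smc_call(FWU_SMC_IMAGE_COPY, image_id,
			     src + done, n, total_size) != 0)
			return -1;
		done += n;
	}
	return 0;
}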
/*******************************************************************************
* This function is responsible for authenticating Normal/Secure images.
******************************************************************************/
static int bl1_fwu_image_auth(unsigned int image_id,
uintptr_t image_src,
unsigned int image_size,
unsigned int flags)
{
int result;
uintptr_t base_addr;
unsigned int total_size;
/* Get the image descriptor. */
image_desc_t *image_desc = bl1_plat_get_image_desc(image_id);
if (!image_desc)
return -EPERM;
if (GET_SECURITY_STATE(flags) == SECURE) {
if (image_desc->state != IMAGE_STATE_RESET) {
WARN("BL1-FWU: Authentication from secure world "
"while in invalid state\n");
return -EPERM;
}
} else {
if (GET_SECURITY_STATE(image_desc->ep_info.h.attr) == SECURE) {
if (image_desc->state != IMAGE_STATE_COPIED) {
WARN("BL1-FWU: Authentication of secure image "
"from non-secure world while not in copied state\n");
return -EPERM;
}
} else {
if (image_desc->state != IMAGE_STATE_RESET) {
WARN("BL1-FWU: Authentication of non-secure image "
"from non-secure world while in invalid state\n");
return -EPERM;
}
}
}
if (image_desc->state == IMAGE_STATE_COPIED) {
/*
* Image is in COPIED state.
* Use the stored address and size.
*/
base_addr = image_desc->image_info.image_base;
total_size = image_desc->image_info.image_size;
} else {
if ((!image_src) || (!image_size) ||
check_uptr_overflow(image_src, image_size - 1)) {
WARN("BL1-FWU: Auth not allowed due to invalid"
" image source/size\n");
return -ENOMEM;
}
/*
* Image is in RESET state.
* Check the parameters and authenticate the source image in place.
*/
if (bl1_plat_mem_check(image_src, image_size,
image_desc->ep_info.h.attr)) {
WARN("BL1-FWU: Authentication arguments source/size not mapped\n");
return -ENOMEM;
}
if (bl1_fwu_add_loaded_id(image_id)) {
WARN("BL1-FWU: Too many images loaded at the same time.\n");
return -ENOMEM;
}
base_addr = image_src;
total_size = image_size;
/* Update the image size in the descriptor. */
image_desc->image_info.image_size = total_size;
}
/*
* Authenticate the image.
*/
INFO("BL1-FWU: Authenticating image_id:%d\n", image_id);
result = auth_mod_verify_img(image_id, (void *)base_addr, total_size);
if (result != 0) {
WARN("BL1-FWU: Authentication Failed err=%d\n", result);
/*
* Authentication has failed.
* Clear the memory if the image was copied.
* This is to prevent an attack where the memory still
* contains malicious code that could somehow be executed later.
*/
if (image_desc->state == IMAGE_STATE_COPIED) {
/* Clear the memory. */
zero_normalmem((void *)base_addr, total_size);
flush_dcache_range(base_addr, total_size);
/* Indicate that the image can be copied again. */
image_desc->state = IMAGE_STATE_RESET;
}
/*
* Even if this fails it's ok because the ID isn't in the array.
* The image cannot be in RESET state here, it is checked at the
* beginning of the function.
*/
bl1_fwu_remove_loaded_id(image_id);
return -EAUTH;
}
/* Indicate that image is in authenticated state. */
image_desc->state = IMAGE_STATE_AUTHENTICATED;
/* Allow the platform to handle post-image load */
result = bl1_plat_handle_post_image_load(image_id);
if (result != 0) {
ERROR("BL1-FWU: Failure %d in post-image load of image id %d\n",
result, image_id);
/*
* Panic here as the platform handling of post-image load is
* not correct.
*/
plat_error_handler(result);
}
/*
* Flush image_info to memory so that other
* secure world images can see changes.
*/
flush_dcache_range((unsigned long)&image_desc->image_info,
sizeof(image_info_t));
INFO("BL1-FWU: Authentication was successful\n");
return 0;
}
/*******************************************************************************
* This function is responsible for executing Secure images.
******************************************************************************/
static int bl1_fwu_image_execute(unsigned int image_id,
void **handle,
unsigned int flags)
{
/* Get the image descriptor. */
image_desc_t *image_desc = bl1_plat_get_image_desc(image_id);
/*
* Execution is NOT allowed if:
* image_id is invalid OR
* Caller is from Secure world OR
* Image is Non-Secure OR
* Image is Non-Executable OR
* Image is NOT in AUTHENTICATED state.
*/
if ((!image_desc) ||
(GET_SECURITY_STATE(flags) == SECURE) ||
(GET_SECURITY_STATE(image_desc->ep_info.h.attr) == NON_SECURE) ||
(EP_GET_EXE(image_desc->ep_info.h.attr) == NON_EXECUTABLE) ||
(image_desc->state != IMAGE_STATE_AUTHENTICATED)) {
WARN("BL1-FWU: Execution not allowed due to invalid state/args\n");
return -EPERM;
}
INFO("BL1-FWU: Executing Secure image\n");
#ifdef AARCH64
/* Save NS-EL1 system registers. */
cm_el1_sysregs_context_save(NON_SECURE);
#endif
/* Prepare the image for execution. */
bl1_prepare_next_image(image_id);
/* Update the secure image id. */
sec_exec_image_id = image_id;
#ifdef AARCH64
*handle = cm_get_context(SECURE);
#else
*handle = smc_get_ctx(SECURE);
#endif
return 0;
}
/*******************************************************************************
* This function is responsible for resuming execution in the other security
* world
******************************************************************************/
static register_t bl1_fwu_image_resume(register_t image_param,
void **handle,
unsigned int flags)
{
image_desc_t *image_desc;
unsigned int resume_sec_state;
unsigned int caller_sec_state = GET_SECURITY_STATE(flags);
/* Get the image descriptor for last executed secure image id. */
image_desc = bl1_plat_get_image_desc(sec_exec_image_id);
if (caller_sec_state == NON_SECURE) {
if (!image_desc) {
WARN("BL1-FWU: Resume not allowed due to no available"
"secure image\n");
return -EPERM;
}
} else {
/* image_desc must be valid for secure world callers */
assert(image_desc);
}
assert(GET_SECURITY_STATE(image_desc->ep_info.h.attr) == SECURE);
assert(EP_GET_EXE(image_desc->ep_info.h.attr) == EXECUTABLE);
if (caller_sec_state == SECURE) {
assert(image_desc->state == IMAGE_STATE_EXECUTED);
/* Update the flags. */
image_desc->state = IMAGE_STATE_INTERRUPTED;
resume_sec_state = NON_SECURE;
} else {
assert(image_desc->state == IMAGE_STATE_INTERRUPTED);
/* Update the flags. */
image_desc->state = IMAGE_STATE_EXECUTED;
resume_sec_state = SECURE;
}
INFO("BL1-FWU: Resuming %s world context\n",
(resume_sec_state == SECURE) ? "secure" : "normal");
#ifdef AARCH64
/* Save the EL1 system registers of calling world. */
cm_el1_sysregs_context_save(caller_sec_state);
/* Restore the EL1 system registers of resuming world. */
cm_el1_sysregs_context_restore(resume_sec_state);
/* Update the next context. */
cm_set_next_eret_context(resume_sec_state);
*handle = cm_get_context(resume_sec_state);
#else
/* Update the next context. */
cm_set_next_context(cm_get_context(resume_sec_state));
/* Prepare the smc context for the next BL image. */
smc_set_next_ctx(resume_sec_state);
*handle = smc_get_ctx(resume_sec_state);
#endif
return image_param;
}
/*******************************************************************************
* This function is responsible for resuming normal world context.
******************************************************************************/
static int bl1_fwu_sec_image_done(void **handle, unsigned int flags)
{
image_desc_t *image_desc;
/* Make sure caller is from the secure world */
if (GET_SECURITY_STATE(flags) == NON_SECURE) {
WARN("BL1-FWU: Image done not allowed from normal world\n");
return -EPERM;
}
/* Get the image descriptor for last executed secure image id */
image_desc = bl1_plat_get_image_desc(sec_exec_image_id);
/* image_desc must correspond to a valid secure executing image */
assert(image_desc);
assert(GET_SECURITY_STATE(image_desc->ep_info.h.attr) == SECURE);
assert(EP_GET_EXE(image_desc->ep_info.h.attr) == EXECUTABLE);
assert(image_desc->state == IMAGE_STATE_EXECUTED);
#if ENABLE_ASSERTIONS
int rc = bl1_fwu_remove_loaded_id(sec_exec_image_id);
assert(rc == 0);
#else
bl1_fwu_remove_loaded_id(sec_exec_image_id);
#endif
/* Update the flags. */
image_desc->state = IMAGE_STATE_RESET;
sec_exec_image_id = INVALID_IMAGE_ID;
INFO("BL1-FWU: Resuming Normal world context\n");
#ifdef AARCH64
/*
* Secure world is done so no need to save the context.
* Just restore the Non-Secure context.
*/
cm_el1_sysregs_context_restore(NON_SECURE);
/* Update the next context. */
cm_set_next_eret_context(NON_SECURE);
*handle = cm_get_context(NON_SECURE);
#else
/* Update the next context. */
cm_set_next_context(cm_get_context(NON_SECURE));
/* Prepare the smc context for the next BL image. */
smc_set_next_ctx(NON_SECURE);
*handle = smc_get_ctx(NON_SECURE);
#endif
return 0;
}
/*******************************************************************************
* This function provides the opportunity for users to perform any
* platform specific handling after the Firmware update is done.
******************************************************************************/
__dead2 static void bl1_fwu_done(void *client_cookie, void *reserved)
{
NOTICE("BL1-FWU: *******FWU Process Completed*******\n");
/*
* Call platform done function.
*/
bl1_plat_fwu_done(client_cookie, reserved);
assert(0);
}
/*******************************************************************************
* This function resets an image to IMAGE_STATE_RESET. It fails if the image is
* being executed.
******************************************************************************/
static int bl1_fwu_image_reset(unsigned int image_id, unsigned int flags)
{
image_desc_t *image_desc = bl1_plat_get_image_desc(image_id);
if ((!image_desc) || (GET_SECURITY_STATE(flags) == SECURE)) {
WARN("BL1-FWU: Reset not allowed due to invalid args\n");
return -EPERM;
}
switch (image_desc->state) {
case IMAGE_STATE_RESET:
/* Nothing to do. */
break;
case IMAGE_STATE_INTERRUPTED:
case IMAGE_STATE_AUTHENTICATED:
case IMAGE_STATE_COPIED:
case IMAGE_STATE_COPYING:
if (bl1_fwu_remove_loaded_id(image_id)) {
WARN("BL1-FWU: Image reset couldn't find the image ID\n");
return -EPERM;
}
if (image_desc->copied_size) {
/* Clear the memory if the image is copied */
assert(GET_SECURITY_STATE(image_desc->ep_info.h.attr) == SECURE);
zero_normalmem((void *)image_desc->image_info.image_base,
image_desc->copied_size);
flush_dcache_range(image_desc->image_info.image_base,
image_desc->copied_size);
}
/* Reset status variables */
image_desc->copied_size = 0;
image_desc->image_info.image_size = 0;
image_desc->state = IMAGE_STATE_RESET;
/* Clear authentication state */
auth_img_flags[image_id] = 0;
break;
case IMAGE_STATE_EXECUTED:
default:
assert(0);
break;
}
return 0;
}
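Taken together, the handlers above drive a per-image state machine, roughly (a sketch; COPYING only appears for multi-block copies):
RESET -> COPYING -> COPIED -> AUTHENTICATED -> EXECUTED <-> INTERRUPTED
FWU_SMC_IMAGE_RESET returns any non-executing state to RESET (as does a failed authentication of a copied image), and FWU_SMC_SEC_IMAGE_DONE returns EXECUTED to RESET.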


@@ -0,0 +1,302 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <arch_helpers.h>
#include <assert.h>
#include <auth_mod.h>
#include <bl1.h>
#include <bl_common.h>
#include <console.h>
#include <debug.h>
#include <errata_report.h>
#include <platform.h>
#include <platform_def.h>
#include <smccc_helpers.h>
#include <utils.h>
#include <uuid.h>
#include "bl1_private.h"
/* BL1 Service UUID */
DEFINE_SVC_UUID(bl1_svc_uid,
0xfd3967d4, 0x72cb, 0x4d9a, 0xb5, 0x75,
0x67, 0x15, 0xd6, 0xf4, 0xbb, 0x4a);
static void bl1_load_bl2(void);
/*******************************************************************************
* Helper utility to calculate the BL2 memory layout taking into consideration
* the BL1 RW data, assuming that it is at the top of the memory layout.
******************************************************************************/
void bl1_calc_bl2_mem_layout(const meminfo_t *bl1_mem_layout,
meminfo_t *bl2_mem_layout)
{
assert(bl1_mem_layout != NULL);
assert(bl2_mem_layout != NULL);
#if LOAD_IMAGE_V2
/*
* Remove BL1 RW data from the scope of memory visible to BL2.
* This is assuming BL1 RW data is at the top of bl1_mem_layout.
*/
assert(BL1_RW_BASE > bl1_mem_layout->total_base);
bl2_mem_layout->total_base = bl1_mem_layout->total_base;
bl2_mem_layout->total_size = BL1_RW_BASE - bl1_mem_layout->total_base;
#else
/* Check that BL1's memory is lying outside of the free memory */
assert((BL1_RAM_LIMIT <= bl1_mem_layout->free_base) ||
(BL1_RAM_BASE >= bl1_mem_layout->free_base +
bl1_mem_layout->free_size));
/* Remove BL1 RW data from the scope of memory visible to BL2 */
*bl2_mem_layout = *bl1_mem_layout;
reserve_mem(&bl2_mem_layout->total_base,
&bl2_mem_layout->total_size,
BL1_RAM_BASE,
BL1_RAM_LIMIT - BL1_RAM_BASE);
#endif /* LOAD_IMAGE_V2 */
flush_dcache_range((unsigned long)bl2_mem_layout, sizeof(meminfo_t));
}
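A worked example for the LOAD_IMAGE_V2 branch, with hypothetical numbers: if bl1_mem_layout->total_base is 0x04000000 and BL1_RW_BASE is 0x0403C000, BL2 is handed total_base = 0x04000000 and total_size = 0x3C000, i.e. everything in trusted SRAM below BL1's RW data.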
#if !ERROR_DEPRECATED
/*******************************************************************************
* Compatibility default implementation for deprecated API. This has a weak
* definition. Platform specific code can override it if it wishes to.
******************************************************************************/
#pragma weak bl1_init_bl2_mem_layout
/*******************************************************************************
* Function that takes a memory layout into which BL2 has been loaded and
* populates a new memory layout for BL2 that ensures that BL1's data sections
* resident in secure RAM are not visible to BL2.
******************************************************************************/
void bl1_init_bl2_mem_layout(const meminfo_t *bl1_mem_layout,
meminfo_t *bl2_mem_layout)
{
bl1_calc_bl2_mem_layout(bl1_mem_layout, bl2_mem_layout);
}
#endif
/*******************************************************************************
* Function to perform late architectural and platform specific initialization.
* It also queries the platform to load and run the next BL image. Only called
* by the primary cpu after a cold boot.
******************************************************************************/
void bl1_main(void)
{
unsigned int image_id;
/* Announce our arrival */
NOTICE(FIRMWARE_WELCOME_STR);
NOTICE("BL1: %s\n", version_string);
NOTICE("BL1: %s\n", build_message);
INFO("BL1: RAM %p - %p\n", (void *)BL1_RAM_BASE,
(void *)BL1_RAM_LIMIT);
print_errata_status();
#if ENABLE_ASSERTIONS
u_register_t val;
/*
* Ensure that MMU/Caches and coherency are turned on
*/
#ifdef AARCH32
val = read_sctlr();
#else
val = read_sctlr_el3();
#endif
assert(val & SCTLR_M_BIT);
assert(val & SCTLR_C_BIT);
assert(val & SCTLR_I_BIT);
/*
* Check that Cache Writeback Granule (CWG) in CTR_EL0 matches the
* provided platform value
*/
val = (read_ctr_el0() >> CTR_CWG_SHIFT) & CTR_CWG_MASK;
/*
* If CWG is zero, then no CWG information is available but we can
* at least check that the platform value does not exceed the architectural
* maximum.
*/
if (val != 0)
assert(CACHE_WRITEBACK_GRANULE == SIZE_FROM_LOG2_WORDS(val));
else
assert(CACHE_WRITEBACK_GRANULE <= MAX_CACHE_LINE_SIZE);
#endif /* ENABLE_ASSERTIONS */
/* Perform remaining generic architectural setup from EL3 */
bl1_arch_setup();
#if TRUSTED_BOARD_BOOT
/* Initialize authentication module */
auth_mod_init();
#endif /* TRUSTED_BOARD_BOOT */
/* Perform platform setup in BL1. */
bl1_platform_setup();
/* Get the image id of next image to load and run. */
image_id = bl1_plat_get_next_image_id();
/*
* We currently interpret any image id other than
* BL2_IMAGE_ID as the start of firmware update.
*/
if (image_id == BL2_IMAGE_ID)
bl1_load_bl2();
else
NOTICE("BL1-FWU: *******FWU Process Started*******\n");
bl1_prepare_next_image(image_id);
console_flush();
}
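On the CWG check above: the CTR_EL0.CWG field encodes log2 of the cache writeback granule in words, so SIZE_FROM_LOG2_WORDS(n) works out to (4 << n) bytes; a field value of 4, for example, means 4 << 4 = 64-byte granules, the common case on ARMv8 implementations.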
/*******************************************************************************
* This function locates and loads the BL2 raw binary image in the trusted SRAM.
* Called by the primary cpu after a cold boot.
* TODO: Add support for alternative image load mechanisms, e.g. using a
* virtio/ELF loader.
******************************************************************************/
static void bl1_load_bl2(void)
{
image_desc_t *image_desc;
image_info_t *image_info;
int err;
/* Get the image descriptor */
image_desc = bl1_plat_get_image_desc(BL2_IMAGE_ID);
assert(image_desc);
/* Get the image info */
image_info = &image_desc->image_info;
INFO("BL1: Loading BL2\n");
err = bl1_plat_handle_pre_image_load(BL2_IMAGE_ID);
if (err) {
ERROR("Failure in pre image load handling of BL2 (%d)\n", err);
plat_error_handler(err);
}
#if LOAD_IMAGE_V2
err = load_auth_image(BL2_IMAGE_ID, image_info);
#else
entry_point_info_t *ep_info;
meminfo_t *bl1_tzram_layout;
/* Get the entry point info */
ep_info = &image_desc->ep_info;
/* Find out how much free trusted ram remains after BL1 load */
bl1_tzram_layout = bl1_plat_sec_mem_layout();
/* Load the BL2 image */
err = load_auth_image(bl1_tzram_layout,
BL2_IMAGE_ID,
image_info->image_base,
image_info,
ep_info);
#endif /* LOAD_IMAGE_V2 */
if (err) {
ERROR("Failed to load BL2 firmware.\n");
plat_error_handler(err);
}
/* Allow platform to handle image information. */
err = bl1_plat_handle_post_image_load(BL2_IMAGE_ID);
if (err) {
ERROR("Failure in post image load handling of BL2 (%d)\n", err);
plat_error_handler(err);
}
NOTICE("BL1: Booting BL2\n");
}
/*******************************************************************************
* Function called just before handing over to the next BL to inform the user
* about the boot progress. In debug mode, also print details about the BL
* image's execution context.
******************************************************************************/
void bl1_print_next_bl_ep_info(const entry_point_info_t *bl_ep_info)
{
#ifdef AARCH32
NOTICE("BL1: Booting BL32\n");
#else
NOTICE("BL1: Booting BL31\n");
#endif /* AARCH32 */
print_entry_point_info(bl_ep_info);
}
#if SPIN_ON_BL1_EXIT
void print_debug_loop_message(void)
{
NOTICE("BL1: Debug loop, spinning forever\n");
NOTICE("BL1: Please connect the debugger to continue\n");
}
#endif
/*******************************************************************************
* Top level handler for servicing BL1 SMCs.
******************************************************************************/
register_t bl1_smc_handler(unsigned int smc_fid,
register_t x1,
register_t x2,
register_t x3,
register_t x4,
void *cookie,
void *handle,
unsigned int flags)
{
#if TRUSTED_BOARD_BOOT
/*
* Dispatch FWU calls to FWU SMC handler and return its return
* value
*/
if (is_fwu_fid(smc_fid)) {
return bl1_fwu_smc_handler(smc_fid, x1, x2, x3, x4, cookie,
handle, flags);
}
#endif
switch (smc_fid) {
case BL1_SMC_CALL_COUNT:
SMC_RET1(handle, BL1_NUM_SMC_CALLS);
case BL1_SMC_UID:
SMC_UUID_RET(handle, bl1_svc_uid);
case BL1_SMC_VERSION:
SMC_RET1(handle, BL1_SMC_MAJOR_VER | BL1_SMC_MINOR_VER);
default:
break;
}
WARN("Unimplemented BL1 SMC Call: 0x%x \n", smc_fid);
SMC_RET1(handle, SMC_UNK);
}
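BL1_SMC_VERSION packs both version numbers into a single return value. Assuming the conventional TF layout, with the major version in the upper half-word (an assumption about how BL1_SMC_MAJOR_VER is defined), a caller would decode it as:
/* Sketch: decode the BL1_SMC_VERSION result; 16/16 split assumed. */
unsigned int major = (ret >> 16) & 0xffffU;
unsigned int minor = ret & 0xffffU;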
/*******************************************************************************
* BL1 SMC wrapper. This function is only used in AArch32 mode to ensure ABI
* compliance when invoking bl1_smc_handler.
******************************************************************************/
register_t bl1_smc_wrapper(uint32_t smc_fid,
void *cookie,
void *handle,
unsigned int flags)
{
register_t x1, x2, x3, x4;
assert(handle);
get_smc_params_from_ctx(handle, x1, x2, x3, x4);
return bl1_smc_handler(smc_fid, x1, x2, x3, x4, cookie, handle, flags);
}


@@ -0,0 +1,38 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef __BL1_PRIVATE_H__
#define __BL1_PRIVATE_H__
#include <types.h>
#include <utils_def.h>
/*******************************************************************************
* Declarations of linker defined symbols which will tell us where BL1 lives
* in Trusted ROM and RAM
******************************************************************************/
IMPORT_SYM(uintptr_t, __BL1_ROM_END__, BL1_ROM_END);
IMPORT_SYM(uintptr_t, __BL1_RAM_START__, BL1_RAM_BASE);
IMPORT_SYM(uintptr_t, __BL1_RAM_END__, BL1_RAM_LIMIT);
/******************************************
* Function prototypes
*****************************************/
void bl1_arch_setup(void);
void bl1_arch_next_el_setup(void);
void bl1_prepare_next_image(unsigned int image_id);
register_t bl1_fwu_smc_handler(unsigned int smc_fid,
register_t x1,
register_t x2,
register_t x3,
register_t x4,
void *cookie,
void *handle,
unsigned int flags);
#endif /* __BL1_PRIVATE_H__ */


@@ -0,0 +1,71 @@
/*
* Copyright (c) 2015-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <bl1.h>
#include <bl_common.h>
#include <platform_def.h>
#include <tbbr/tbbr_img_desc.h>
image_desc_t bl1_tbbr_image_descs[] = {
{
.image_id = FWU_CERT_ID,
SET_STATIC_PARAM_HEAD(image_info, PARAM_IMAGE_BINARY,
VERSION_1, image_info_t, 0),
.image_info.image_base = BL2_BASE,
#if LOAD_IMAGE_V2
.image_info.image_max_size = BL2_LIMIT - BL2_BASE,
#endif
SET_STATIC_PARAM_HEAD(ep_info, PARAM_IMAGE_BINARY,
VERSION_1, entry_point_info_t, SECURE),
},
#if NS_BL1U_BASE
{
.image_id = NS_BL1U_IMAGE_ID,
SET_STATIC_PARAM_HEAD(ep_info, PARAM_EP,
VERSION_1, entry_point_info_t, NON_SECURE | EXECUTABLE),
.ep_info.pc = NS_BL1U_BASE,
},
#endif
#if SCP_BL2U_BASE
{
.image_id = SCP_BL2U_IMAGE_ID,
SET_STATIC_PARAM_HEAD(image_info, PARAM_IMAGE_BINARY,
VERSION_1, image_info_t, 0),
.image_info.image_base = SCP_BL2U_BASE,
#if LOAD_IMAGE_V2
.image_info.image_max_size = SCP_BL2U_LIMIT - SCP_BL2U_BASE,
#endif
SET_STATIC_PARAM_HEAD(ep_info, PARAM_IMAGE_BINARY,
VERSION_1, entry_point_info_t, SECURE),
},
#endif
#if BL2U_BASE
{
.image_id = BL2U_IMAGE_ID,
SET_STATIC_PARAM_HEAD(image_info, PARAM_EP,
VERSION_1, image_info_t, 0),
.image_info.image_base = BL2U_BASE,
#if LOAD_IMAGE_V2
.image_info.image_max_size = BL2U_LIMIT - BL2U_BASE,
#endif
SET_STATIC_PARAM_HEAD(ep_info, PARAM_EP,
VERSION_1, entry_point_info_t, SECURE | EXECUTABLE),
.ep_info.pc = BL2U_BASE,
},
#endif
#if NS_BL2U_BASE
{
.image_id = NS_BL2U_IMAGE_ID,
SET_STATIC_PARAM_HEAD(ep_info, PARAM_EP,
VERSION_1, entry_point_info_t, NON_SECURE),
},
#endif
BL2_IMAGE_DESC,
{
.image_id = INVALID_IMAGE_ID,
}
};
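A platform's bl1_plat_get_image_desc() typically resolves an image ID by scanning this table up to its INVALID_IMAGE_ID terminator. A minimal sketch, given the TF headers already included above (the ARM reference platforms' helper is similar, though not necessarily identical):
/* Sketch: look up an image descriptor by ID in the table above. */
static image_desc_t *find_image_desc(unsigned int image_id)
{
	image_desc_t *d;

	for (d = bl1_tbbr_image_descs; d->image_id != INVALID_IMAGE_ID; d++) {
		if (d->image_id == image_id)
			return d;
	}
	return NULL;
}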


@@ -0,0 +1,15 @@
/*
* Copyright (c) 2016, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
/*******************************************************************************
* Placeholder function to perform any Secure SVC specific architectural
* setup. At the moment there is nothing to do.
******************************************************************************/
void bl2_arch_setup(void)
{
}


@@ -0,0 +1,89 @@
/*
* Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl_common.h>
#include <el3_common_macros.S>
.globl bl2_entrypoint
.globl bl2_run_next_image
func bl2_entrypoint
/* Save arguments r0-r3 from the previous boot loader */
mov r9, r0
mov r10, r1
mov r11, r2
mov r12, r3
el3_entrypoint_common \
_init_sctlr=1 \
_warm_boot_mailbox=!PROGRAMMABLE_RESET_ADDRESS \
_secondary_cold_boot=!COLD_BOOT_SINGLE_CPU \
_init_memory=1 \
_init_c_runtime=1 \
_exception_vectors=bl2_vector_table
/*
* Restore parameters of boot rom
*/
mov r0, r9
mov r1, r10
mov r2, r11
mov r3, r12
bl bl2_el3_early_platform_setup
bl bl2_el3_plat_arch_setup
/* ---------------------------------------------
* Jump to main function.
* ---------------------------------------------
*/
bl bl2_main
/* ---------------------------------------------
* Should never reach this point.
* ---------------------------------------------
*/
no_ret plat_panic_handler
endfunc bl2_entrypoint
func bl2_run_next_image
mov r8, r0
/*
* MMU needs to be disabled because both BL2 and BL32 execute
* in PL1, and therefore share the same address space.
* BL32 will initialize the address space according to its
* own requirement.
*/
bl disable_mmu_icache_secure
stcopr r0, TLBIALL
dsb sy
isb
mov r0, r8
bl bl2_el3_plat_prepare_exit
/*
* Extract PC and SPSR based on struct `entry_point_info_t`
* and load it in LR and SPSR registers respectively.
*/
ldr lr, [r8, #ENTRY_POINT_INFO_PC_OFFSET]
ldr r1, [r8, #(ENTRY_POINT_INFO_PC_OFFSET + 4)]
msr spsr, r1
/* Some BL32 stages expect lr_svc to provide the BL33 entry address */
cps #MODE32_svc
ldr lr, [r8, #ENTRY_POINT_INFO_LR_SVC_OFFSET]
cps #MODE32_mon
add r8, r8, #ENTRY_POINT_INFO_ARGS_OFFSET
ldm r8, {r0, r1, r2, r3}
eret
endfunc bl2_run_next_image


@@ -0,0 +1,21 @@
/*
* Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl_common.h>
.globl bl2_vector_table
vector_base bl2_vector_table
b bl2_entrypoint
b report_exception /* Undef */
b report_exception /* SVC call */
b report_exception /* Prefetch abort */
b report_exception /* Data abort */
b report_exception /* Reserved */
b report_exception /* IRQ */
b report_exception /* FIQ */


@@ -0,0 +1,135 @@
/*
* Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl_common.h>
.globl bl2_vector_table
.globl bl2_entrypoint
vector_base bl2_vector_table
b bl2_entrypoint
b report_exception /* Undef */
b report_exception /* SVC call */
b report_exception /* Prefetch abort */
b report_exception /* Data abort */
b report_exception /* Reserved */
b report_exception /* IRQ */
b report_exception /* FIQ */
func bl2_entrypoint
/*---------------------------------------------
* Save arguments r0 - r3 from BL1 for future
* use.
* ---------------------------------------------
*/
mov r9, r0
mov r10, r1
mov r11, r2
mov r12, r3
/* ---------------------------------------------
* Set the exception vector to something sane.
* ---------------------------------------------
*/
ldr r0, =bl2_vector_table
stcopr r0, VBAR
isb
/* -----------------------------------------------------
* Enable the instruction cache
* -----------------------------------------------------
*/
ldcopr r0, SCTLR
orr r0, r0, #SCTLR_I_BIT
stcopr r0, SCTLR
isb
/* ---------------------------------------------
* Since BL2 executes after BL1, it is assumed
* here that BL1 has already done the
* necessary register initializations.
* ---------------------------------------------
*/
/* ---------------------------------------------
* Invalidate the RW memory used by the BL2
* image. This includes the data and NOBITS
* sections. This is done to safeguard against
* possible corruption of this memory by dirty
* cache lines in a system cache as a result of
* use by an earlier boot loader stage.
* ---------------------------------------------
*/
ldr r0, =__RW_START__
ldr r1, =__RW_END__
sub r1, r1, r0
bl inv_dcache_range
/* ---------------------------------------------
* Zero out NOBITS sections. There are 2 of them:
* - the .bss section;
* - the coherent memory section.
* ---------------------------------------------
*/
ldr r0, =__BSS_START__
ldr r1, =__BSS_SIZE__
bl zeromem
#if USE_COHERENT_MEM
ldr r0, =__COHERENT_RAM_START__
ldr r1, =__COHERENT_RAM_UNALIGNED_SIZE__
bl zeromem
#endif
/* --------------------------------------------
* Allocate a stack whose memory will be marked
* as Normal-IS-WBWA when the MMU is enabled.
* There is no risk of reading stale stack
* memory after enabling the MMU as only the
* primary cpu is running at the moment.
* --------------------------------------------
*/
bl plat_set_my_stack
/* ---------------------------------------------
* Initialize the stack protector canary before
* any C code is called.
* ---------------------------------------------
*/
#if STACK_PROTECTOR_ENABLED
bl update_stack_protector_canary
#endif
/* ---------------------------------------------
* Perform early platform setup & platform
* specific early arch. setup e.g. mmu setup
* ---------------------------------------------
*/
mov r0, r9
mov r1, r10
mov r2, r11
mov r3, r12
bl bl2_early_platform_setup2
bl bl2_plat_arch_setup
/* ---------------------------------------------
* Jump to main function.
* ---------------------------------------------
*/
bl bl2_main
/* ---------------------------------------------
* Should never reach this point.
* ---------------------------------------------
*/
no_ret plat_panic_handler
endfunc bl2_entrypoint


@@ -0,0 +1,19 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <arch_helpers.h>
#include "../bl2_private.h"
/*******************************************************************************
* Placeholder function to perform any S-EL1 specific architectural setup. At
* the moment there is nothing to do.
******************************************************************************/
void bl2_arch_setup(void)
{
/* Give access to FP/SIMD registers */
write_cpacr(CPACR_EL1_FPEN(CPACR_EL1_FP_TRAP_NONE));
}
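For reference, CPACR_EL1.FPEN occupies bits [21:20]; the value written above sets it to 0b11, i.e. FP/SIMD accesses are trapped at neither EL0 nor EL1.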


@@ -0,0 +1,77 @@
/*
* Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl_common.h>
#include <el3_common_macros.S>
.globl bl2_entrypoint
.globl bl2_vector_table
.globl bl2_el3_run_image
.globl bl2_run_next_image
func bl2_entrypoint
/* Save arguments x0-x3 from the previous boot loader */
mov x20, x0
mov x21, x1
mov x22, x2
mov x23, x3
el3_entrypoint_common \
_init_sctlr=1 \
_warm_boot_mailbox=!PROGRAMMABLE_RESET_ADDRESS \
_secondary_cold_boot=!COLD_BOOT_SINGLE_CPU \
_init_memory=1 \
_init_c_runtime=1 \
_exception_vectors=bl2_el3_exceptions
/*
* Restore parameters of boot rom
*/
mov x0, x20
mov x1, x21
mov x2, x22
mov x3, x23
bl bl2_el3_early_platform_setup
bl bl2_el3_plat_arch_setup
/* ---------------------------------------------
* Jump to main function.
* ---------------------------------------------
*/
bl bl2_main
/* ---------------------------------------------
* Should never reach this point.
* ---------------------------------------------
*/
no_ret plat_panic_handler
endfunc bl2_entrypoint
func bl2_run_next_image
mov x20, x0
/*
* MMU needs to be disabled because both BL2 and BL31 execute
* in EL3, and therefore share the same address space.
* BL31 will initialize the address space according to its
* own requirement.
*/
bl disable_mmu_icache_el3
tlbi alle3
bl bl2_el3_plat_prepare_exit
ldp x0, x1, [x20, #ENTRY_POINT_INFO_PC_OFFSET]
msr elr_el3, x0
msr spsr_el3, x1
ldp x6, x7, [x20, #(ENTRY_POINT_INFO_ARGS_OFFSET + 0x30)]
ldp x4, x5, [x20, #(ENTRY_POINT_INFO_ARGS_OFFSET + 0x20)]
ldp x2, x3, [x20, #(ENTRY_POINT_INFO_ARGS_OFFSET + 0x10)]
ldp x0, x1, [x20, #(ENTRY_POINT_INFO_ARGS_OFFSET + 0x0)]
eret
endfunc bl2_run_next_image
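The loads above rely on the entry_point_info_t layout exported to assembly: pc and spsr sit as an adjacent pair at ENTRY_POINT_INFO_PC_OFFSET, and args.arg0 through arg7 occupy eight consecutive 64-bit slots from ENTRY_POINT_INFO_ARGS_OFFSET, which is why x0-x7 are reloaded in pairs from offsets 0x0 through 0x30.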


@@ -0,0 +1,131 @@
/*
* Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl1.h>
#include <bl_common.h>
#include <context.h>
/* -----------------------------------------------------------------------------
* Very simple stackless exception handlers used by BL2.
* -----------------------------------------------------------------------------
*/
.globl bl2_el3_exceptions
vector_base bl2_el3_exceptions
/* -----------------------------------------------------
* Current EL with SP0 : 0x0 - 0x200
* -----------------------------------------------------
*/
vector_entry SynchronousExceptionSP0
mov x0, #SYNC_EXCEPTION_SP_EL0
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SynchronousExceptionSP0
vector_entry IrqSP0
mov x0, #IRQ_SP_EL0
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size IrqSP0
vector_entry FiqSP0
mov x0, #FIQ_SP_EL0
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size FiqSP0
vector_entry SErrorSP0
mov x0, #SERROR_SP_EL0
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SErrorSP0
/* -----------------------------------------------------
* Current EL with SPx: 0x200 - 0x400
* -----------------------------------------------------
*/
vector_entry SynchronousExceptionSPx
mov x0, #SYNC_EXCEPTION_SP_ELX
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SynchronousExceptionSPx
vector_entry IrqSPx
mov x0, #IRQ_SP_ELX
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size IrqSPx
vector_entry FiqSPx
mov x0, #FIQ_SP_ELX
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size FiqSPx
vector_entry SErrorSPx
mov x0, #SERROR_SP_ELX
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SErrorSPx
/* -----------------------------------------------------
* Lower EL using AArch64 : 0x400 - 0x600
* -----------------------------------------------------
*/
vector_entry SynchronousExceptionA64
mov x0, #SYNC_EXCEPTION_AARCH64
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SynchronousExceptionA64
vector_entry IrqA64
mov x0, #IRQ_AARCH64
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size IrqA64
vector_entry FiqA64
mov x0, #FIQ_AARCH64
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size FiqA64
vector_entry SErrorA64
mov x0, #SERROR_AARCH64
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SErrorA64
/* -----------------------------------------------------
* Lower EL using AArch32 : 0x600 - 0x800
* -----------------------------------------------------
*/
vector_entry SynchronousExceptionA32
mov x0, #SYNC_EXCEPTION_AARCH32
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SynchronousExceptionA32
vector_entry IrqA32
mov x0, #IRQ_AARCH32
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size IrqA32
vector_entry FiqA32
mov x0, #FIQ_AARCH32
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size FiqA32
vector_entry SErrorA32
mov x0, #SERROR_AARCH32
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SErrorA32
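The table above follows the architectural AArch64 vector layout: four groups (current EL with SP0, current EL with SPx, lower EL using AArch64, lower EL using AArch32) of four 0x80-byte slots each (synchronous, IRQ, FIQ, SError), 0x800 bytes in total. vector_entry places each handler at the start of its slot and check_vector_size asserts that the handler fits within the 0x80 bytes available.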


@@ -0,0 +1,127 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl_common.h>
.globl bl2_entrypoint
func bl2_entrypoint
/*---------------------------------------------
* Save arguments x0 - x3 from BL1 for future
* use.
* ---------------------------------------------
*/
mov x20, x0
mov x21, x1
mov x22, x2
mov x23, x3
/* ---------------------------------------------
* Set the exception vector to something sane.
* ---------------------------------------------
*/
adr x0, early_exceptions
msr vbar_el1, x0
isb
/* ---------------------------------------------
* Enable the SError interrupt now that the
* exception vectors have been setup.
* ---------------------------------------------
*/
msr daifclr, #DAIF_ABT_BIT
/* ---------------------------------------------
* Enable the instruction cache, stack pointer
* and data access alignment checks
* ---------------------------------------------
*/
mov x1, #(SCTLR_I_BIT | SCTLR_A_BIT | SCTLR_SA_BIT)
mrs x0, sctlr_el1
orr x0, x0, x1
msr sctlr_el1, x0
isb
/* ---------------------------------------------
* Invalidate the RW memory used by the BL2
* image. This includes the data and NOBITS
* sections. This is done to safeguard against
* possible corruption of this memory by dirty
* cache lines in a system cache as a result of
* use by an earlier boot loader stage.
* ---------------------------------------------
*/
adr x0, __RW_START__
adr x1, __RW_END__
sub x1, x1, x0
bl inv_dcache_range
/* ---------------------------------------------
* Zero out NOBITS sections. There are 2 of them:
* - the .bss section;
* - the coherent memory section.
* ---------------------------------------------
*/
ldr x0, =__BSS_START__
ldr x1, =__BSS_SIZE__
bl zeromem
#if USE_COHERENT_MEM
ldr x0, =__COHERENT_RAM_START__
ldr x1, =__COHERENT_RAM_UNALIGNED_SIZE__
bl zeromem
#endif
/* --------------------------------------------
* Allocate a stack whose memory will be marked
* as Normal-IS-WBWA when the MMU is enabled.
* There is no risk of reading stale stack
* memory after enabling the MMU as only the
* primary cpu is running at the moment.
* --------------------------------------------
*/
bl plat_set_my_stack
/* ---------------------------------------------
* Initialize the stack protector canary before
* any C code is called.
* ---------------------------------------------
*/
#if STACK_PROTECTOR_ENABLED
bl update_stack_protector_canary
#endif
/* ---------------------------------------------
* Perform early platform setup & platform
* specific early arch. setup e.g. mmu setup
* ---------------------------------------------
*/
mov x0, x20
mov x1, x21
mov x2, x22
mov x3, x23
bl bl2_early_platform_setup2
bl bl2_plat_arch_setup
/* ---------------------------------------------
* Jump to main function.
* ---------------------------------------------
*/
bl bl2_main
/* ---------------------------------------------
* Should never reach this point.
* ---------------------------------------------
*/
no_ret plat_panic_handler
endfunc bl2_entrypoint


@@ -0,0 +1,154 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <platform_def.h>
#include <xlat_tables_defs.h>
OUTPUT_FORMAT(PLATFORM_LINKER_FORMAT)
OUTPUT_ARCH(PLATFORM_LINKER_ARCH)
ENTRY(bl2_entrypoint)
MEMORY {
RAM (rwx): ORIGIN = BL2_BASE, LENGTH = BL2_LIMIT - BL2_BASE
}
SECTIONS
{
. = BL2_BASE;
ASSERT(. == ALIGN(PAGE_SIZE),
"BL2_BASE address is not aligned on a page boundary.")
#if SEPARATE_CODE_AND_RODATA
.text . : {
__TEXT_START__ = .;
*bl2_entrypoint.o(.text*)
*(.text*)
*(.vectors)
. = NEXT(PAGE_SIZE);
__TEXT_END__ = .;
} >RAM
.rodata . : {
__RODATA_START__ = .;
*(.rodata*)
/* Ensure 8-byte alignment for descriptors and ensure inclusion */
. = ALIGN(8);
__PARSER_LIB_DESCS_START__ = .;
KEEP(*(.img_parser_lib_descs))
__PARSER_LIB_DESCS_END__ = .;
. = NEXT(PAGE_SIZE);
__RODATA_END__ = .;
} >RAM
#else
ro . : {
__RO_START__ = .;
*bl2_entrypoint.o(.text*)
*(.text*)
*(.rodata*)
/* Ensure 8-byte alignment for descriptors and ensure inclusion */
. = ALIGN(8);
__PARSER_LIB_DESCS_START__ = .;
KEEP(*(.img_parser_lib_descs))
__PARSER_LIB_DESCS_END__ = .;
*(.vectors)
__RO_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked as
* read-only, executable. No RW data from the next section must
* creep in. Ensure the rest of the current memory page is unused.
*/
. = NEXT(PAGE_SIZE);
__RO_END__ = .;
} >RAM
#endif
/*
* Define a linker symbol to mark start of the RW memory area for this
* image.
*/
__RW_START__ = . ;
/*
* .data must be placed at a lower address than the stacks if the stack
* protector is enabled. Alternatively, the .data.stack_protector_canary
* section can be placed independently of the main .data section.
*/
.data . : {
__DATA_START__ = .;
*(.data*)
__DATA_END__ = .;
} >RAM
stacks (NOLOAD) : {
__STACKS_START__ = .;
*(tzfw_normal_stacks)
__STACKS_END__ = .;
} >RAM
/*
* The .bss section gets initialised to 0 at runtime.
* Its base address should be 16-byte aligned for better performance of the
* zero-initialization code.
*/
.bss : ALIGN(16) {
__BSS_START__ = .;
*(SORT_BY_ALIGNMENT(.bss*))
*(COMMON)
__BSS_END__ = .;
} >RAM
/*
* The xlat_table section is for full, aligned page tables (4K).
* Removing them from .bss avoids forcing 4K alignment on
* the .bss section. The tables are initialized to zero by the translation
* tables library.
*/
xlat_table (NOLOAD) : {
*(xlat_table)
} >RAM
#if USE_COHERENT_MEM
/*
* The base address of the coherent memory section must be page-aligned (4K)
* to guarantee that the coherent data are stored on their own pages and
* are not mixed with normal data. This is required to set up the correct
* memory attributes for the coherent data page tables.
*/
coherent_ram (NOLOAD) : ALIGN(PAGE_SIZE) {
__COHERENT_RAM_START__ = .;
*(tzfw_coherent_mem)
__COHERENT_RAM_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked
* as device memory. No other unexpected data must creep in.
* Ensure the rest of the current memory page is unused.
*/
. = NEXT(PAGE_SIZE);
__COHERENT_RAM_END__ = .;
} >RAM
#endif
/*
* Define a linker symbol to mark end of the RW memory area for this
* image.
*/
__RW_END__ = .;
__BL2_END__ = .;
__BSS_SIZE__ = SIZEOF(.bss);
#if USE_COHERENT_MEM
__COHERENT_RAM_UNALIGNED_SIZE__ =
__COHERENT_RAM_END_UNALIGNED__ - __COHERENT_RAM_START__;
#endif
ASSERT(. <= BL2_LIMIT, "BL2 image has exceeded its limit.")
}


@@ -0,0 +1,35 @@
#
# Copyright (c) 2013-2017, ARM Limited and Contributors. All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
#
BL2_SOURCES += bl2/bl2_main.c \
bl2/${ARCH}/bl2_arch_setup.c \
lib/locks/exclusive/${ARCH}/spinlock.S \
plat/common/${ARCH}/platform_up_stack.S \
${MBEDTLS_COMMON_SOURCES} \
${MBEDTLS_CRYPTO_SOURCES} \
${MBEDTLS_X509_SOURCES}
ifeq (${ARCH},aarch64)
BL2_SOURCES += common/aarch64/early_exceptions.S
endif
ifeq (${LOAD_IMAGE_V2},1)
BL2_SOURCES += bl2/bl2_image_load_v2.c
else
BL2_SOURCES += bl2/bl2_image_load.c
endif
ifeq (${BL2_AT_EL3},0)
BL2_SOURCES += bl2/${ARCH}/bl2_entrypoint.S
BL2_LINKERFILE := bl2/bl2.ld.S
else
BL2_SOURCES += bl2/${ARCH}/bl2_el3_entrypoint.S \
bl2/${ARCH}/bl2_el3_exceptions.S \
lib/cpus/${ARCH}/cpu_helpers.S \
lib/cpus/errata_report.c
BL2_LINKERFILE := bl2/bl2_el3.ld.S
endif

View File

@@ -0,0 +1,238 @@
/*
* Copyright (c) 2017-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <platform_def.h>
#include <xlat_tables_defs.h>
OUTPUT_FORMAT(PLATFORM_LINKER_FORMAT)
OUTPUT_ARCH(PLATFORM_LINKER_ARCH)
ENTRY(bl2_entrypoint)
MEMORY {
#if BL2_IN_XIP_MEM
ROM (rx): ORIGIN = BL2_RO_BASE, LENGTH = BL2_RO_LIMIT - BL2_RO_BASE
RAM (rwx): ORIGIN = BL2_RW_BASE, LENGTH = BL2_RW_LIMIT - BL2_RW_BASE
#else
RAM (rwx): ORIGIN = BL2_BASE, LENGTH = BL2_LIMIT - BL2_BASE
#endif
}
SECTIONS
{
#if BL2_IN_XIP_MEM
. = BL2_RO_BASE;
ASSERT(. == ALIGN(PAGE_SIZE),
"BL2_RO_BASE address is not aligned on a page boundary.")
#else
. = BL2_BASE;
ASSERT(. == ALIGN(PAGE_SIZE),
"BL2_BASE address is not aligned on a page boundary.")
#endif
#if SEPARATE_CODE_AND_RODATA
.text . : {
__TEXT_START__ = .;
__TEXT_RESIDENT_START__ = .;
*bl2_el3_entrypoint.o(.text*)
*(.text.asm.*)
__TEXT_RESIDENT_END__ = .;
*(.text*)
*(.vectors)
. = NEXT(PAGE_SIZE);
__TEXT_END__ = .;
#if BL2_IN_XIP_MEM
} >ROM
#else
} >RAM
#endif
.rodata . : {
__RODATA_START__ = .;
*(.rodata*)
/* Ensure 8-byte alignment for descriptors and ensure inclusion */
. = ALIGN(8);
__PARSER_LIB_DESCS_START__ = .;
KEEP(*(.img_parser_lib_descs))
__PARSER_LIB_DESCS_END__ = .;
/*
* Ensure 8-byte alignment for cpu_ops so that its fields are also
* aligned. Also ensure cpu_ops inclusion.
*/
. = ALIGN(8);
__CPU_OPS_START__ = .;
KEEP(*(cpu_ops))
__CPU_OPS_END__ = .;
. = NEXT(PAGE_SIZE);
__RODATA_END__ = .;
#if BL2_IN_XIP_MEM
} >ROM
#else
} >RAM
#endif
ASSERT(__TEXT_RESIDENT_END__ - __TEXT_RESIDENT_START__ <= PAGE_SIZE,
"Resident part of BL2 has exceeded its limit.")
#else
ro . : {
__RO_START__ = .;
__TEXT_RESIDENT_START__ = .;
*bl2_el3_entrypoint.o(.text*)
*(.text.asm.*)
__TEXT_RESIDENT_END__ = .;
*(.text*)
*(.rodata*)
/*
* Ensure 8-byte alignment for cpu_ops so that its fields are also
* aligned. Also ensure cpu_ops inclusion.
*/
. = ALIGN(8);
__CPU_OPS_START__ = .;
KEEP(*(cpu_ops))
__CPU_OPS_END__ = .;
/* Ensure 8-byte alignment for descriptors and ensure inclusion */
. = ALIGN(8);
__PARSER_LIB_DESCS_START__ = .;
KEEP(*(.img_parser_lib_descs))
__PARSER_LIB_DESCS_END__ = .;
*(.vectors)
__RO_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked as
* read-only, executable. No RW data from the next section must
* creep in. Ensure the rest of the current memory page is unused.
*/
. = NEXT(PAGE_SIZE);
__RO_END__ = .;
#if BL2_IN_XIP_MEM
} >ROM
#else
} >RAM
#endif
#endif
ASSERT(__CPU_OPS_END__ > __CPU_OPS_START__,
"cpu_ops not defined for this platform.")
#if BL2_IN_XIP_MEM
. = BL2_RW_BASE;
ASSERT(BL2_RW_BASE == ALIGN(PAGE_SIZE),
"BL2_RW_BASE address is not aligned on a page boundary.")
#endif
/*
* Define a linker symbol to mark start of the RW memory area for this
* image.
*/
__RW_START__ = . ;
/*
* .data must be placed at a lower address than the stacks if the stack
* protector is enabled. Alternatively, the .data.stack_protector_canary
* section can be placed independently of the main .data section.
*/
.data . : {
__DATA_RAM_START__ = .;
*(.data*)
__DATA_RAM_END__ = .;
#if BL2_IN_XIP_MEM
} >RAM AT>ROM
#else
} >RAM
#endif
stacks (NOLOAD) : {
__STACKS_START__ = .;
*(tzfw_normal_stacks)
__STACKS_END__ = .;
} >RAM
/*
* The .bss section gets initialised to 0 at runtime.
* Its base address should be 16-byte aligned for better performance of the
* zero-initialization code.
*/
.bss : ALIGN(16) {
__BSS_START__ = .;
*(SORT_BY_ALIGNMENT(.bss*))
*(COMMON)
__BSS_END__ = .;
} >RAM
/*
* The xlat_table section is for full, aligned page tables (4K).
* Removing them from .bss avoids forcing 4K alignment on
* the .bss section. The tables are initialized to zero by the translation
* tables library.
*/
xlat_table (NOLOAD) : {
*(xlat_table)
} >RAM
#if USE_COHERENT_MEM
/*
* The base address of the coherent memory section must be page-aligned (4K)
* to guarantee that the coherent data are stored on their own pages and
* are not mixed with normal data. This is required to set up the correct
* memory attributes for the coherent data page tables.
*/
coherent_ram (NOLOAD) : ALIGN(PAGE_SIZE) {
__COHERENT_RAM_START__ = .;
*(tzfw_coherent_mem)
__COHERENT_RAM_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked
* as device memory. No other unexpected data must creep in.
* Ensure the rest of the current memory page is unused.
*/
. = NEXT(PAGE_SIZE);
__COHERENT_RAM_END__ = .;
} >RAM
#endif
/*
* Define a linker symbol to mark end of the RW memory area for this
* image.
*/
__RW_END__ = .;
__BL2_END__ = .;
#if BL2_IN_XIP_MEM
__BL2_RAM_START__ = ADDR(.data);
__BL2_RAM_END__ = .;
__DATA_ROM_START__ = LOADADDR(.data);
__DATA_SIZE__ = SIZEOF(.data);
/*
* The .data section is the last PROGBITS section so its end marks the end
* of BL2's RO content in XIP memory.
*/
__BL2_ROM_END__ = __DATA_ROM_START__ + __DATA_SIZE__;
ASSERT(__BL2_ROM_END__ <= BL2_RO_LIMIT,
"BL2's RO content has exceeded its limit.")
#endif
__BSS_SIZE__ = SIZEOF(.bss);
#if USE_COHERENT_MEM
__COHERENT_RAM_UNALIGNED_SIZE__ =
__COHERENT_RAM_END_UNALIGNED__ - __COHERENT_RAM_START__;
#endif
#if BL2_IN_XIP_MEM
ASSERT(. <= BL2_RW_LIMIT, "BL2's RW content has exceeded its limit.")
#else
ASSERT(. <= BL2_LIMIT, "BL2 image has exceeded its limit.")
#endif
}

View File

@@ -0,0 +1,261 @@
/*
* Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <arch_helpers.h>
#include <assert.h>
#include <auth_mod.h>
#include <bl_common.h>
#include <debug.h>
#include <errno.h>
#include <platform.h>
#include <platform_def.h>
#include <stdint.h>
/*
* Check for platforms that use obsolete image terminology
*/
#ifdef BL30_BASE
# error "BL30_BASE platform define no longer used - please use SCP_BL2_BASE"
#endif
/*******************************************************************************
* Load the SCP_BL2 image if there's one.
* If a platform does not want to attempt to load the SCP_BL2 image, it must
* leave SCP_BL2_BASE undefined.
* Return 0 on success or if there is no SCP_BL2 image to load; a negative
* error code otherwise.
******************************************************************************/
static int load_scp_bl2(void)
{
int e = 0;
#ifdef SCP_BL2_BASE
meminfo_t scp_bl2_mem_info;
image_info_t scp_bl2_image_info;
/*
* It is up to the platform to specify where SCP_BL2 should be loaded if
* it exists. It could create space in the secure sram or point to
* completely different memory.
*
* The entry point information is not relevant in this case as the AP
* won't execute the SCP_BL2 image.
*/
INFO("BL2: Loading SCP_BL2\n");
bl2_plat_get_scp_bl2_meminfo(&scp_bl2_mem_info);
scp_bl2_image_info.h.version = VERSION_1;
e = load_auth_image(&scp_bl2_mem_info,
SCP_BL2_IMAGE_ID,
SCP_BL2_BASE,
&scp_bl2_image_info,
NULL);
if (e == 0) {
/* The subsequent handling of SCP_BL2 is platform specific */
e = bl2_plat_handle_scp_bl2(&scp_bl2_image_info);
if (e) {
ERROR("Failure in platform-specific handling of SCP_BL2 image.\n");
}
}
#endif /* SCP_BL2_BASE */
return e;
}
#ifndef EL3_PAYLOAD_BASE
/*******************************************************************************
* Load the BL31 image.
* The bl2_to_bl31_params and bl31_ep_info params will be updated with the
* relevant BL31 information.
* Return 0 on success, a negative error code otherwise.
******************************************************************************/
static int load_bl31(bl31_params_t *bl2_to_bl31_params,
entry_point_info_t *bl31_ep_info)
{
meminfo_t *bl2_tzram_layout;
int e;
INFO("BL2: Loading BL31\n");
assert(bl2_to_bl31_params != NULL);
assert(bl31_ep_info != NULL);
/* Find out how much free trusted ram remains after BL2 load */
bl2_tzram_layout = bl2_plat_sec_mem_layout();
/* Set the X0 parameter to BL31 */
bl31_ep_info->args.arg0 = (unsigned long)bl2_to_bl31_params;
/* Load the BL31 image */
e = load_auth_image(bl2_tzram_layout,
BL31_IMAGE_ID,
BL31_BASE,
bl2_to_bl31_params->bl31_image_info,
bl31_ep_info);
if (e == 0) {
bl2_plat_set_bl31_ep_info(bl2_to_bl31_params->bl31_image_info,
bl31_ep_info);
}
return e;
}
/*******************************************************************************
* Load the BL32 image if there's one.
* The bl2_to_bl31_params param will be updated with the relevant BL32
* information.
* If a platform does not want to attempt to load the BL32 image, it must
* leave BL32_BASE undefined.
* Return 0 on success or if there is no BL32 image to load; a negative
* error code otherwise.
******************************************************************************/
static int load_bl32(bl31_params_t *bl2_to_bl31_params)
{
int e = 0;
#ifdef BL32_BASE
meminfo_t bl32_mem_info;
INFO("BL2: Loading BL32\n");
assert(bl2_to_bl31_params != NULL);
/*
* It is up to the platform to specify where BL32 should be loaded if
* it exists. It could create space in the secure sram or point to
* completely different memory.
*/
bl2_plat_get_bl32_meminfo(&bl32_mem_info);
e = load_auth_image(&bl32_mem_info,
BL32_IMAGE_ID,
BL32_BASE,
bl2_to_bl31_params->bl32_image_info,
bl2_to_bl31_params->bl32_ep_info);
if (e == 0) {
bl2_plat_set_bl32_ep_info(
bl2_to_bl31_params->bl32_image_info,
bl2_to_bl31_params->bl32_ep_info);
}
#endif /* BL32_BASE */
return e;
}
#ifndef PRELOADED_BL33_BASE
/*******************************************************************************
* Load the BL33 image.
* The bl2_to_bl31_params param will be updated with the relevant BL33
* information.
* Return 0 on success, a negative error code otherwise.
******************************************************************************/
static int load_bl33(bl31_params_t *bl2_to_bl31_params)
{
meminfo_t bl33_mem_info;
int e;
INFO("BL2: Loading BL33\n");
assert(bl2_to_bl31_params != NULL);
bl2_plat_get_bl33_meminfo(&bl33_mem_info);
/* Load the BL33 image in non-secure memory provided by the platform */
e = load_auth_image(&bl33_mem_info,
BL33_IMAGE_ID,
plat_get_ns_image_entrypoint(),
bl2_to_bl31_params->bl33_image_info,
bl2_to_bl31_params->bl33_ep_info);
if (e == 0) {
bl2_plat_set_bl33_ep_info(bl2_to_bl31_params->bl33_image_info,
bl2_to_bl31_params->bl33_ep_info);
}
return e;
}
#endif /* PRELOADED_BL33_BASE */
#endif /* EL3_PAYLOAD_BASE */
/*******************************************************************************
* This function loads SCP_BL2/BL3x images and returns the ep_info for
* the next executable image.
******************************************************************************/
struct entry_point_info *bl2_load_images(void)
{
bl31_params_t *bl2_to_bl31_params;
entry_point_info_t *bl31_ep_info;
int e;
e = load_scp_bl2();
if (e) {
ERROR("Failed to load SCP_BL2 (%i)\n", e);
plat_error_handler(e);
}
/* Perform platform setup in BL2 after loading SCP_BL2 */
bl2_platform_setup();
/*
* Get a pointer to the memory the platform has set aside to pass
* information to BL31.
*/
bl2_to_bl31_params = bl2_plat_get_bl31_params();
bl31_ep_info = bl2_plat_get_bl31_ep_info();
#ifdef EL3_PAYLOAD_BASE
/*
* In the case of an EL3 payload, we don't need to load any further
* images. Just update the BL31 entrypoint info structure to make BL1
* jump to the EL3 payload.
* The pointer to the memory the platform has set aside to pass
* information to BL31 in the normal boot flow is reused here, even
* though only a fraction of the information contained in the
* bl31_params_t structure makes sense in the context of EL3 payloads.
* This will be refined in the future.
*/
INFO("BL2: Populating the entrypoint info for the EL3 payload\n");
bl31_ep_info->pc = EL3_PAYLOAD_BASE;
bl31_ep_info->args.arg0 = (unsigned long) bl2_to_bl31_params;
bl2_plat_set_bl31_ep_info(NULL, bl31_ep_info);
#else
e = load_bl31(bl2_to_bl31_params, bl31_ep_info);
if (e) {
ERROR("Failed to load BL31 (%i)\n", e);
plat_error_handler(e);
}
e = load_bl32(bl2_to_bl31_params);
if (e) {
if (e == -EAUTH) {
ERROR("Failed to authenticate BL32\n");
plat_error_handler(e);
} else {
WARN("Failed to load BL32 (%i)\n", e);
}
}
#ifdef PRELOADED_BL33_BASE
/*
* In this case, don't load the BL33 image as it's already loaded in
* memory. Update BL33 entrypoint information.
*/
INFO("BL2: Populating the entrypoint info for the preloaded BL33\n");
bl2_to_bl31_params->bl33_ep_info->pc = PRELOADED_BL33_BASE;
bl2_plat_set_bl33_ep_info(NULL, bl2_to_bl31_params->bl33_ep_info);
#else
e = load_bl33(bl2_to_bl31_params);
if (e) {
ERROR("Failed to load BL33 (%i)\n", e);
plat_error_handler(e);
}
#endif /* PRELOADED_BL33_BASE */
#endif /* EL3_PAYLOAD_BASE */
/* Flush the params to be passed to memory */
bl2_plat_flush_bl31_params();
return bl31_ep_info;
}
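bl2_plat_handle_scp_bl2() above is a platform hook; its signature here is inferred from the call site. A hypothetical no-op implementation, for a platform whose SCP needs no post-load handling, might look like this (a sketch, not any platform's actual code):
/*
 * Hypothetical platform stub. Returning 0 tells load_scp_bl2() that the
 * platform-specific handling of the SCP_BL2 image succeeded.
 */
#include <bl_common.h>	/* struct image_info */
int bl2_plat_handle_scp_bl2(struct image_info *scp_bl2_image_info)
{
	(void)scp_bl2_image_info;	/* nothing to hand over to the SCP */
	return 0;
}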

View File

@@ -0,0 +1,106 @@
/*
* Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <arch_helpers.h>
#include <assert.h>
#include <auth_mod.h>
#include <bl_common.h>
#include <debug.h>
#include <desc_image_load.h>
#include <platform.h>
#include <platform_def.h>
#include <stdint.h>
#include "bl2_private.h"
/*******************************************************************************
* This function loads SCP_BL2/BL3x images and returns the ep_info for
* the next executable image.
******************************************************************************/
struct entry_point_info *bl2_load_images(void)
{
bl_params_t *bl2_to_next_bl_params;
bl_load_info_t *bl2_load_info;
const bl_load_info_node_t *bl2_node_info;
int plat_setup_done = 0;
int err;
/*
* Get information about the images to load.
*/
bl2_load_info = plat_get_bl_image_load_info();
assert(bl2_load_info);
assert(bl2_load_info->head);
assert(bl2_load_info->h.type == PARAM_BL_LOAD_INFO);
assert(bl2_load_info->h.version >= VERSION_2);
bl2_node_info = bl2_load_info->head;
while (bl2_node_info) {
/*
* Perform platform setup before loading the image,
* if indicated in the image attributes AND if NOT
* already done before.
*/
if (bl2_node_info->image_info->h.attr & IMAGE_ATTRIB_PLAT_SETUP) {
if (plat_setup_done) {
WARN("BL2: Platform setup already done!!\n");
} else {
INFO("BL2: Doing platform setup\n");
bl2_platform_setup();
plat_setup_done = 1;
}
}
err = bl2_plat_handle_pre_image_load(bl2_node_info->image_id);
if (err) {
ERROR("BL2: Failure in pre image load handling (%i)\n", err);
plat_error_handler(err);
}
if (!(bl2_node_info->image_info->h.attr & IMAGE_ATTRIB_SKIP_LOADING)) {
INFO("BL2: Loading image id %d\n", bl2_node_info->image_id);
err = load_auth_image(bl2_node_info->image_id,
bl2_node_info->image_info);
if (err) {
ERROR("BL2: Failed to load image (%i)\n", err);
plat_error_handler(err);
}
} else {
INFO("BL2: Skip loading image id %d\n", bl2_node_info->image_id);
}
/* Allow platform to handle image information. */
err = bl2_plat_handle_post_image_load(bl2_node_info->image_id);
if (err) {
ERROR("BL2: Failure in post image load handling (%i)\n", err);
plat_error_handler(err);
}
/* Go to next image */
bl2_node_info = bl2_node_info->next_load_info;
}
/*
* Get information to pass to the next image.
*/
bl2_to_next_bl_params = plat_get_next_bl_params();
assert(bl2_to_next_bl_params);
assert(bl2_to_next_bl_params->head);
assert(bl2_to_next_bl_params->h.type == PARAM_BL_PARAMS);
assert(bl2_to_next_bl_params->h.version >= VERSION_2);
assert(bl2_to_next_bl_params->head->ep_info);
/* Populate arg0 for the next BL image if not already provided */
if (bl2_to_next_bl_params->head->ep_info->args.arg0 == (u_register_t)0)
bl2_to_next_bl_params->head->ep_info->args.arg0 =
(u_register_t)bl2_to_next_bl_params;
/* Flush the parameters to be passed to next image */
plat_flush_next_bl_params();
return bl2_to_next_bl_params->head->ep_info;
}
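The pre/post image load hooks called in the loop above are also platform-provided. A minimal sketch of both, as hypothetical stubs that unconditionally report success so that bl2_load_images() proceeds, illustrates the contract:
/*
 * Hypothetical no-op hooks. A real platform would adjust image or entry
 * point information here; any non-zero return aborts the boot via
 * plat_error_handler().
 */
int bl2_plat_handle_pre_image_load(unsigned int image_id)
{
	(void)image_id;
	return 0;
}
int bl2_plat_handle_post_image_load(unsigned int image_id)
{
	(void)image_id;
	return 0;
}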

View File

@@ -0,0 +1,74 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch_helpers.h>
#include <auth_mod.h>
#include <bl1.h>
#include <bl2.h>
#include <bl_common.h>
#include <console.h>
#include <debug.h>
#include <platform.h>
#include "bl2_private.h"
#ifdef AARCH32
#define NEXT_IMAGE "BL32"
#else
#define NEXT_IMAGE "BL31"
#endif
/*******************************************************************************
* The only thing to do in BL2 is to load further images and pass control to
* the next BL stage. The memory occupied by BL2 will be reclaimed by the
* BL3x stages. BL2
* runs entirely in S-EL1.
******************************************************************************/
void bl2_main(void)
{
entry_point_info_t *next_bl_ep_info;
NOTICE("BL2: %s\n", version_string);
NOTICE("BL2: %s\n", build_message);
/* Perform remaining generic architectural setup in S-EL1 */
bl2_arch_setup();
#if TRUSTED_BOARD_BOOT
/* Initialize authentication module */
auth_mod_init();
#endif /* TRUSTED_BOARD_BOOT */
/* Initialize boot source */
bl2_plat_preload_setup();
/* Load the subsequent bootloader images. */
next_bl_ep_info = bl2_load_images();
#if !BL2_AT_EL3
#ifdef AARCH32
/*
* For AArch32 state BL1 and BL2 share the MMU setup.
* Given that BL2 does not map BL1 regions, the MMU needs
* to be disabled in order to go back to BL1.
*/
disable_mmu_icache_secure();
#endif /* AARCH32 */
console_flush();
/*
* Run next BL image via an SMC to BL1. Information on how to pass
* control to the BL32 (if present) and BL33 software images will
* be passed to next BL image as an argument.
*/
smc(BL1_SMC_RUN_IMAGE, (unsigned long)next_bl_ep_info, 0, 0, 0, 0, 0, 0);
#else
NOTICE("BL2: Booting " NEXT_IMAGE "\n");
print_entry_point_info(next_bl_ep_info);
console_flush();
bl2_run_next_image(next_bl_ep_info);
#endif
}

View File

@@ -0,0 +1,36 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef __BL2_PRIVATE_H__
#define __BL2_PRIVATE_H__
#if BL2_IN_XIP_MEM
/*******************************************************************************
* Declarations of linker defined symbols which will tell us where BL2 lives
* in Trusted ROM and RAM
******************************************************************************/
extern uintptr_t __BL2_ROM_END__;
#define BL2_ROM_END (uintptr_t)(&__BL2_ROM_END__)
extern uintptr_t __BL2_RAM_START__;
extern uintptr_t __BL2_RAM_END__;
#define BL2_RAM_BASE (uintptr_t)(&__BL2_RAM_START__)
#define BL2_RAM_LIMIT (uintptr_t)(&__BL2_RAM_END__)
#endif
/******************************************
* Forward declarations
*****************************************/
struct entry_point_info;
/******************************************
* Function prototypes
*****************************************/
void bl2_arch_setup(void);
struct entry_point_info *bl2_load_images(void);
void bl2_run_next_image(const struct entry_point_info *bl_ep_info);
#endif /* __BL2_PRIVATE_H__ */
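As a usage note, the macros above let C code treat the linker-defined symbols as plain addresses; for example, a sketch (only meaningful when BL2_IN_XIP_MEM is set, since the macros are only defined in that case) computing BL2's RAM footprint:
#include <stddef.h>
#if BL2_IN_XIP_MEM
/* Sketch: derive BL2's RAM footprint from the imported linker symbols. */
static inline size_t bl2_ram_size(void)
{
	return (size_t)(BL2_RAM_LIMIT - BL2_RAM_BASE);
}
#endif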

View File

@@ -0,0 +1,126 @@
/*
* Copyright (c) 2016, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl_common.h>
.globl bl2u_vector_table
.globl bl2u_entrypoint
vector_base bl2u_vector_table
b bl2u_entrypoint
b report_exception /* Undef */
b report_exception /* SVC call */
b report_exception /* Prefetch abort */
b report_exception /* Data abort */
b report_exception /* Reserved */
b report_exception /* IRQ */
b report_exception /* FIQ */
func bl2u_entrypoint
/*---------------------------------------------
* Save the extents of the trusted ram available
* to BL2U (passed in r1) for future use.
* r0 is not currently used.
* ---------------------------------------------
*/
mov r11, r1
mov r10, r2
/* ---------------------------------------------
* Set the exception vector to something sane.
* ---------------------------------------------
*/
ldr r0, =bl2u_vector_table
stcopr r0, VBAR
isb
/* -----------------------------------------------------
* Enable the instruction cache
* -----------------------------------------------------
*/
ldcopr r0, SCTLR
orr r0, r0, #SCTLR_I_BIT
stcopr r0, SCTLR
isb
/* ---------------------------------------------
* Since BL2U executes after BL1, it is assumed
* here that BL1 has already done the
* necessary register initializations.
* ---------------------------------------------
*/
/* ---------------------------------------------
* Invalidate the RW memory used by the BL2U
* image. This includes the data and NOBITS
* sections. This is done to safeguard against
* possible corruption of this memory by dirty
* cache lines in a system cache as a result of
* use by an earlier boot loader stage.
* ---------------------------------------------
*/
ldr r0, =__RW_START__
ldr r1, =__RW_END__
sub r1, r1, r0
bl inv_dcache_range
/* ---------------------------------------------
* Zero out NOBITS sections. There are 2 of them:
* - the .bss section;
* - the coherent memory section.
* ---------------------------------------------
*/
ldr r0, =__BSS_START__
ldr r1, =__BSS_SIZE__
bl zeromem
/* --------------------------------------------
* Allocate a stack whose memory will be marked
* as Normal-IS-WBWA when the MMU is enabled.
* There is no risk of reading stale stack
* memory after enabling the MMU as only the
* primary cpu is running at the moment.
* --------------------------------------------
*/
bl plat_set_my_stack
/* ---------------------------------------------
* Initialize the stack protector canary before
* any C code is called.
* ---------------------------------------------
*/
#if STACK_PROTECTOR_ENABLED
bl update_stack_protector_canary
#endif
/* ---------------------------------------------
* Perform early platform setup & platform
* specific early arch. setup e.g. mmu setup
* ---------------------------------------------
*/
mov r0, r11
mov r1, r10
bl bl2u_early_platform_setup
bl bl2u_plat_arch_setup
/* ---------------------------------------------
* Jump to main function.
* ---------------------------------------------
*/
bl bl2u_main
/* ---------------------------------------------
* Should never reach this point.
* ---------------------------------------------
*/
no_ret plat_panic_handler
endfunc bl2u_entrypoint

View File

@@ -0,0 +1,116 @@
/*
* Copyright (c) 2015-2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl_common.h>
.globl bl2u_entrypoint
func bl2u_entrypoint
/*---------------------------------------------
* Store the extents of the tzram available to
* BL2U and other platform specific information
* for future use. x0 is currently not used.
* ---------------------------------------------
*/
mov x20, x1
mov x21, x2
/* ---------------------------------------------
* Set the exception vector to something sane.
* ---------------------------------------------
*/
adr x0, early_exceptions
msr vbar_el1, x0
isb
/* ---------------------------------------------
* Enable the SError interrupt now that the
* exception vectors have been set up.
* ---------------------------------------------
*/
msr daifclr, #DAIF_ABT_BIT
/* ---------------------------------------------
* Enable the instruction cache, stack pointer
* and data access alignment checks
* ---------------------------------------------
*/
mov x1, #(SCTLR_I_BIT | SCTLR_A_BIT | SCTLR_SA_BIT)
mrs x0, sctlr_el1
orr x0, x0, x1
msr sctlr_el1, x0
isb
/* ---------------------------------------------
* Invalidate the RW memory used by the BL2U
* image. This includes the data and NOBITS
* sections. This is done to safeguard against
* possible corruption of this memory by dirty
* cache lines in a system cache as a result of
* use by an earlier boot loader stage.
* ---------------------------------------------
*/
adr x0, __RW_START__
adr x1, __RW_END__
sub x1, x1, x0
bl inv_dcache_range
/* ---------------------------------------------
* Zero out NOBITS sections. There are 2 of them:
* - the .bss section;
* - the coherent memory section.
* ---------------------------------------------
*/
ldr x0, =__BSS_START__
ldr x1, =__BSS_SIZE__
bl zeromem
/* --------------------------------------------
* Allocate a stack whose memory will be marked
* as Normal-IS-WBWA when the MMU is enabled.
* There is no risk of reading stale stack
* memory after enabling the MMU as only the
* primary cpu is running at the moment.
* --------------------------------------------
*/
bl plat_set_my_stack
/* ---------------------------------------------
* Initialize the stack protector canary before
* any C code is called.
* ---------------------------------------------
*/
#if STACK_PROTECTOR_ENABLED
bl update_stack_protector_canary
#endif
/* ---------------------------------------------
* Perform early platform setup & platform
* specific early arch. setup e.g. mmu setup
* ---------------------------------------------
*/
mov x0, x20
mov x1, x21
bl bl2u_early_platform_setup
bl bl2u_plat_arch_setup
/* ---------------------------------------------
* Jump to bl2u_main function.
* ---------------------------------------------
*/
bl bl2u_main
/* ---------------------------------------------
* Should never reach this point.
* ---------------------------------------------
*/
no_ret plat_panic_handler
endfunc bl2u_entrypoint

View File

@@ -0,0 +1,136 @@
/*
* Copyright (c) 2015-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <platform_def.h>
#include <xlat_tables_defs.h>
OUTPUT_FORMAT(PLATFORM_LINKER_FORMAT)
OUTPUT_ARCH(PLATFORM_LINKER_ARCH)
ENTRY(bl2u_entrypoint)
MEMORY {
RAM (rwx): ORIGIN = BL2U_BASE, LENGTH = BL2U_LIMIT - BL2U_BASE
}
SECTIONS
{
. = BL2U_BASE;
ASSERT(. == ALIGN(PAGE_SIZE),
"BL2U_BASE address is not aligned on a page boundary.")
#if SEPARATE_CODE_AND_RODATA
.text . : {
__TEXT_START__ = .;
*bl2u_entrypoint.o(.text*)
*(.text*)
*(.vectors)
. = NEXT(PAGE_SIZE);
__TEXT_END__ = .;
} >RAM
.rodata . : {
__RODATA_START__ = .;
*(.rodata*)
. = NEXT(PAGE_SIZE);
__RODATA_END__ = .;
} >RAM
#else
ro . : {
__RO_START__ = .;
*bl2u_entrypoint.o(.text*)
*(.text*)
*(.rodata*)
*(.vectors)
__RO_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked as
* read-only, executable. No RW data from the next section must
* creep in. Ensure the rest of the current memory page is unused.
*/
. = NEXT(PAGE_SIZE);
__RO_END__ = .;
} >RAM
#endif
/*
* Define a linker symbol to mark start of the RW memory area for this
* image.
*/
__RW_START__ = . ;
/*
* .data must be placed at a lower address than the stacks if the stack
* protector is enabled. Alternatively, the .data.stack_protector_canary
* section can be placed independently of the main .data section.
*/
.data . : {
__DATA_START__ = .;
*(.data*)
__DATA_END__ = .;
} >RAM
stacks (NOLOAD) : {
__STACKS_START__ = .;
*(tzfw_normal_stacks)
__STACKS_END__ = .;
} >RAM
/*
* The .bss section gets initialised to 0 at runtime.
* Its base address should be 16-byte aligned for better performance of the
* zero-initialization code.
*/
.bss : ALIGN(16) {
__BSS_START__ = .;
*(SORT_BY_ALIGNMENT(.bss*))
*(COMMON)
__BSS_END__ = .;
} >RAM
/*
* The xlat_table section is for full, aligned page tables (4K).
* Removing them from .bss avoids forcing 4K alignment on
* the .bss section. The tables are initialized to zero by the translation
* tables library.
*/
xlat_table (NOLOAD) : {
*(xlat_table)
} >RAM
#if USE_COHERENT_MEM
/*
* The base address of the coherent memory section must be page-aligned (4K)
* to guarantee that the coherent data are stored on their own pages and
* are not mixed with normal data. This is required to set up the correct
* memory attributes for the coherent data page tables.
*/
coherent_ram (NOLOAD) : ALIGN(PAGE_SIZE) {
__COHERENT_RAM_START__ = .;
*(tzfw_coherent_mem)
__COHERENT_RAM_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked
* as device memory. No other unexpected data must creep in.
* Ensure the rest of the current memory page is unused.
*/
. = NEXT(PAGE_SIZE);
__COHERENT_RAM_END__ = .;
} >RAM
#endif
/*
* Define a linker symbol to mark end of the RW memory area for this
* image.
*/
__RW_END__ = .;
__BL2U_END__ = .;
__BSS_SIZE__ = SIZEOF(.bss);
ASSERT(. <= BL2U_LIMIT, "BL2U image has exceeded its limit.")
}

View File

@@ -0,0 +1,15 @@
#
# Copyright (c) 2015-2017, ARM Limited and Contributors. All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
#
BL2U_SOURCES += bl2u/bl2u_main.c \
bl2u/${ARCH}/bl2u_entrypoint.S \
plat/common/${ARCH}/platform_up_stack.S
ifeq (${ARCH},aarch64)
BL2U_SOURCES += common/aarch64/early_exceptions.S
endif
BL2U_LINKERFILE := bl2u/bl2u.ld.S

View File

@@ -0,0 +1,63 @@
/*
* Copyright (c) 2015-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <arch_helpers.h>
#include <assert.h>
#include <auth_mod.h>
#include <bl1.h>
#include <bl2u.h>
#include <bl_common.h>
#include <console.h>
#include <debug.h>
#include <platform.h>
#include <platform_def.h>
#include <stdint.h>
/*******************************************************************************
* This function is responsible for:
* - loading SCP_BL2U if the platform has defined SCP_BL2U_BASE;
* - performing platform setup;
* - going back to EL3.
******************************************************************************/
void bl2u_main(void)
{
NOTICE("BL2U: %s\n", version_string);
NOTICE("BL2U: %s\n", build_message);
#if SCP_BL2U_BASE
int rc;
/* Load the subsequent bootloader images */
rc = bl2u_plat_handle_scp_bl2u();
if (rc) {
ERROR("Failed to load SCP_BL2U (%i)\n", rc);
panic();
}
#endif
/* Perform platform setup in BL2U after loading SCP_BL2U */
bl2u_platform_setup();
console_flush();
#ifdef AARCH32
/*
* For AArch32 state BL1 and BL2U share the MMU setup.
* Given that BL2U does not map BL1 regions, the MMU needs
* to be disabled in order to go back to BL1.
*/
disable_mmu_icache_secure();
#endif /* AARCH32 */
/*
* Indicate that BL2U is done and resume to the
* normal world via an SMC to BL1.
* x1 could be passed to Normal world,
* so DO NOT pass any secret information.
*/
smc(FWU_SMC_SEC_IMAGE_DONE, 0, 0, 0, 0, 0, 0, 0);
wfi();
}
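Like BL2's SCP hook, bl2u_plat_handle_scp_bl2u() is platform-provided and its signature is inferred from the call site above. A hypothetical stub for a platform with nothing left to do once SCP_BL2U is in place:
/* Hypothetical stub: nothing to transfer to the SCP; report success. */
int bl2u_plat_handle_scp_bl2u(void)
{
	return 0;
}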

View File

@@ -0,0 +1,204 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <bl_common.h>
#include <el3_common_macros.S>
#include <pmf_asm_macros.S>
#include <runtime_instr.h>
#include <xlat_mmu_helpers.h>
.globl bl31_entrypoint
.globl bl31_warm_entrypoint
/* -----------------------------------------------------
* bl31_entrypoint() is the cold boot entrypoint,
* executed only by the primary cpu.
* -----------------------------------------------------
*/
func bl31_entrypoint
#if !RESET_TO_BL31
/* ---------------------------------------------------------------
* Stash the previous bootloader arguments x0 - x3 for later use.
* ---------------------------------------------------------------
*/
mov x20, x0
mov x21, x1
mov x22, x2
mov x23, x3
/* ---------------------------------------------------------------------
* For !RESET_TO_BL31 systems, only the primary CPU ever reaches
* bl31_entrypoint() during the cold boot flow, so the cold/warm boot
* and primary/secondary CPU logic should not be executed in this case.
*
* Also, assume that the previous bootloader has already initialised
* SCTLR_EL3, including the endianness, and has initialised the memory.
* ---------------------------------------------------------------------
*/
el3_entrypoint_common \
_init_sctlr=0 \
_warm_boot_mailbox=0 \
_secondary_cold_boot=0 \
_init_memory=0 \
_init_c_runtime=1 \
_exception_vectors=runtime_exceptions
#else
/* ---------------------------------------------------------------------
* For RESET_TO_BL31 systems which have a programmable reset address,
* bl31_entrypoint() is executed only on the cold boot path so we can
* skip the warm boot mailbox mechanism.
* ---------------------------------------------------------------------
*/
el3_entrypoint_common \
_init_sctlr=1 \
_warm_boot_mailbox=!PROGRAMMABLE_RESET_ADDRESS \
_secondary_cold_boot=!COLD_BOOT_SINGLE_CPU \
_init_memory=1 \
_init_c_runtime=1 \
_exception_vectors=runtime_exceptions
/* ---------------------------------------------------------------------
* For RESET_TO_BL31 systems, BL31 is the first bootloader to run so
* there's no argument to relay from a previous bootloader. Zero the
* arguments passed to the platform layer to reflect that.
* ---------------------------------------------------------------------
*/
mov x20, 0
mov x21, 0
mov x22, 0
mov x23, 0
#endif /* RESET_TO_BL31 */
/* ---------------------------------------------
* Perform platform specific early arch. setup
* ---------------------------------------------
*/
mov x0, x20
mov x1, x21
mov x2, x22
mov x3, x23
bl bl31_early_platform_setup2
bl bl31_plat_arch_setup
/* ---------------------------------------------
* Jump to main function.
* ---------------------------------------------
*/
bl bl31_main
/* -------------------------------------------------------------
* Clean the .data & .bss sections to main memory. This ensures
* that any global data which was initialised by the primary CPU
* is visible to secondary CPUs before they enable their data
* caches and participate in coherency.
* -------------------------------------------------------------
*/
adr x0, __DATA_START__
adr x1, __DATA_END__
sub x1, x1, x0
bl clean_dcache_range
adr x0, __BSS_START__
adr x1, __BSS_END__
sub x1, x1, x0
bl clean_dcache_range
b el3_exit
endfunc bl31_entrypoint
/* --------------------------------------------------------------------
* This CPU has been physically powered up. It is either resuming from
* suspend or has simply been turned on. In both cases, call the BL31
* warmboot entrypoint.
* --------------------------------------------------------------------
*/
func bl31_warm_entrypoint
#if ENABLE_RUNTIME_INSTRUMENTATION
/*
* This timestamp update happens with cache off. The next
* timestamp collection will need to do cache maintenance prior
* to timestamp update.
*/
pmf_calc_timestamp_addr rt_instr_svc RT_INSTR_EXIT_HW_LOW_PWR
mrs x1, cntpct_el0
str x1, [x0]
#endif
/*
* On the warm boot path, most of the EL3 initialisations performed by
* 'el3_entrypoint_common' must be skipped:
*
* - Only when the platform bypasses the BL1/BL31 entrypoint by
* programming the reset address do we need to initialise SCTLR_EL3.
* In other cases, we assume this has been taken care by the
* entrypoint code.
*
* - No need to determine the type of boot, we know it is a warm boot.
*
* - Do not try to distinguish between primary and secondary CPUs, this
* notion only exists for a cold boot.
*
* - No need to initialise the memory or the C runtime environment,
* it has been done once and for all on the cold boot path.
*/
el3_entrypoint_common \
_init_sctlr=PROGRAMMABLE_RESET_ADDRESS \
_warm_boot_mailbox=0 \
_secondary_cold_boot=0 \
_init_memory=0 \
_init_c_runtime=0 \
_exception_vectors=runtime_exceptions
/*
* We're about to enable MMU and participate in PSCI state coordination.
*
* The PSCI implementation invokes platform routines that enable CPUs to
* participate in coherency. On a system where CPUs are not
* cache-coherent without appropriate platform specific programming,
* having caches enabled until such time might lead to coherency issues
* (resulting from stale data getting speculatively fetched, among
* others). Therefore we keep data caches disabled even after enabling
* the MMU for such platforms.
*
* On systems with hardware-assisted coherency, or on single cluster
* platforms, such platform specific programming is not required to
* enter coherency (as CPUs already are); and there's no reason to have
* caches disabled either.
*/
mov x0, #DISABLE_DCACHE
bl bl31_plat_enable_mmu
#if HW_ASSISTED_COHERENCY || WARMBOOT_ENABLE_DCACHE_EARLY
mrs x0, sctlr_el3
orr x0, x0, #SCTLR_C_BIT
msr sctlr_el3, x0
isb
#endif
bl psci_warmboot_entrypoint
#if ENABLE_RUNTIME_INSTRUMENTATION
pmf_calc_timestamp_addr rt_instr_svc RT_INSTR_EXIT_PSCI
mov x19, x0
/*
* Invalidate before updating timestamp to ensure previous timestamp
* updates on the same cache line with caches disabled are properly
* seen by the same core. Without the cache invalidate, the core might
* write into a stale cache line.
*/
mov x1, #PMF_TS_SIZE
mov x20, x30
bl inv_dcache_range
mov x30, x20
mrs x0, cntpct_el0
str x0, [x19]
#endif
b el3_exit
endfunc bl31_warm_entrypoint

View File

@@ -0,0 +1,342 @@
/*
* Copyright (c) 2014-2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <context.h>
#include <cpu_data.h>
#include <plat_macros.S>
#include <platform_def.h>
#include <utils_def.h>
.globl report_unhandled_exception
.globl report_unhandled_interrupt
.globl el3_panic
#if CRASH_REPORTING
/* ------------------------------------------------------
* The below section deals with dumping the system state
* when an unhandled exception is taken in EL3.
* The layout and the names of the registers which will
* be dumped during an unhandled exception are given below.
* ------------------------------------------------------
*/
.section .rodata.crash_prints, "aS"
print_spacer:
.asciz " =\t\t0x"
gp_regs:
.asciz "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",\
"x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15",\
"x16", "x17", "x18", "x19", "x20", "x21", "x22",\
"x23", "x24", "x25", "x26", "x27", "x28", "x29", ""
el3_sys_regs:
.asciz "scr_el3", "sctlr_el3", "cptr_el3", "tcr_el3",\
"daif", "mair_el3", "spsr_el3", "elr_el3", "ttbr0_el3",\
"esr_el3", "far_el3", ""
non_el3_sys_regs:
.asciz "spsr_el1", "elr_el1", "spsr_abt", "spsr_und",\
"spsr_irq", "spsr_fiq", "sctlr_el1", "actlr_el1", "cpacr_el1",\
"csselr_el1", "sp_el1", "esr_el1", "ttbr0_el1", "ttbr1_el1",\
"mair_el1", "amair_el1", "tcr_el1", "tpidr_el1", "tpidr_el0",\
"tpidrro_el0", "dacr32_el2", "ifsr32_el2", "par_el1",\
"mpidr_el1", "afsr0_el1", "afsr1_el1", "contextidr_el1",\
"vbar_el1", "cntp_ctl_el0", "cntp_cval_el0", "cntv_ctl_el0",\
"cntv_cval_el0", "cntkctl_el1", "sp_el0", "isr_el1", ""
panic_msg:
.asciz "PANIC in EL3 at x30 = 0x"
excpt_msg:
.asciz "Unhandled Exception in EL3.\nx30 =\t\t0x"
intr_excpt_msg:
.asciz "Unhandled Interrupt Exception in EL3.\nx30 =\t\t0x"
/*
* Helper function to print a newline to the console.
*/
func print_newline
mov x0, '\n'
b plat_crash_console_putc
endfunc print_newline
/*
* Helper function to print from crash buf.
* The print loop is controlled by the buf size and
* ascii reg name list which is passed in x6. The
* function returns the crash buf address in x0.
* Clobbers : x0 - x7, sp
*/
func size_controlled_print
/* Save the lr */
mov sp, x30
/* load the crash buf address */
mrs x7, tpidr_el3
test_size_list:
/* Calculate x5 always as it will be clobbered by asm_print_hex */
mrs x5, tpidr_el3
add x5, x5, #CPU_DATA_CRASH_BUF_SIZE
/* Test whether we have reached end of crash buf */
cmp x7, x5
b.eq exit_size_print
ldrb w4, [x6]
/* Test whether we are at end of list */
cbz w4, exit_size_print
mov x4, x6
/* asm_print_str updates x4 to point to next entry in list */
bl asm_print_str
/* update x6 with the updated list pointer */
mov x6, x4
adr x4, print_spacer
bl asm_print_str
ldr x4, [x7], #REGSZ
bl asm_print_hex
bl print_newline
b test_size_list
exit_size_print:
mov x30, sp
ret
endfunc size_controlled_print
/*
* Helper function to store x8 - x15 registers to
* the crash buf. The system register values are
* copied to x8 - x15 by the caller, and are then
* copied to the crash buf by this function.
* x0 points to the crash buf. It then calls
* size_controlled_print to print to console.
* Clobbers : x0 - x7, sp
*/
func str_in_crash_buf_print
/* restore the crash buf address in x0 */
mrs x0, tpidr_el3
stp x8, x9, [x0]
stp x10, x11, [x0, #REGSZ * 2]
stp x12, x13, [x0, #REGSZ * 4]
stp x14, x15, [x0, #REGSZ * 6]
b size_controlled_print
endfunc str_in_crash_buf_print
/* ------------------------------------------------------
* This macro calculates the offset to crash buf from
* cpu_data and stores it in tpidr_el3. It also saves x0
* and x1 in the crash buf by using sp as a temporary
* register.
* ------------------------------------------------------
*/
.macro prepare_crash_buf_save_x0_x1
/* we can corrupt this reg to free up x0 */
mov sp, x0
/* tpidr_el3 contains the address to cpu_data structure */
mrs x0, tpidr_el3
/* Calculate the Crash buffer offset in cpu_data */
add x0, x0, #CPU_DATA_CRASH_BUF_OFFSET
/* Store crash buffer address in tpidr_el3 */
msr tpidr_el3, x0
str x1, [x0, #REGSZ]
mov x1, sp
str x1, [x0]
.endm
/* -----------------------------------------------------
* This function reports a crash (if crash
* reporting is enabled) when an unhandled exception
* occurs. It prints the CPU state via the crash console
* making use of the crash buf. This function will
* not return.
* -----------------------------------------------------
*/
func report_unhandled_exception
prepare_crash_buf_save_x0_x1
adr x0, excpt_msg
mov sp, x0
/* This call will not return */
b do_crash_reporting
endfunc report_unhandled_exception
/* -----------------------------------------------------
* This function reports a crash (if crash
* reporting is enabled) when an unhandled interrupt
* occurs. It prints the CPU state via the crash console
* making use of the crash buf. This function will
* not return.
* -----------------------------------------------------
*/
func report_unhandled_interrupt
prepare_crash_buf_save_x0_x1
adr x0, intr_excpt_msg
mov sp, x0
/* This call will not return */
b do_crash_reporting
endfunc report_unhandled_interrupt
/* -----------------------------------------------------
* This function reports a crash (if crash
* reporting is enabled) when panic() is invoked from
* the C runtime. It prints the CPU state via the crash
* console making use of the crash buf. This function
* will not return.
* -----------------------------------------------------
*/
func el3_panic
msr spsel, #1
prepare_crash_buf_save_x0_x1
adr x0, panic_msg
mov sp, x0
/* This call will not return */
b do_crash_reporting
endfunc el3_panic
/* ------------------------------------------------------------
* The common crash reporting functionality. It requires that x0
* and x1 have already been stored in the crash buf, that sp points to
* the crash message, and that tpidr_el3 contains the crash buf address.
* The function does the following:
* - Retrieve the crash buffer from tpidr_el3
* - Store x2 to x6 in the crash buffer
* - Initialise the crash console.
* - Print the crash message by using the address in sp.
* - Print x30 value to the crash console.
* - Print x0 - x7 from the crash buf to the crash console.
* - Print x8 - x29 (in groups of 8 registers) using the
* crash buf to the crash console.
* - Print el3 sys regs (in groups of 8 registers) using the
* crash buf to the crash console.
* - Print non el3 sys regs (in groups of 8 registers) using
* the crash buf to the crash console.
* ------------------------------------------------------------
*/
func do_crash_reporting
/* Retrieve the crash buf from tpidr_el3 */
mrs x0, tpidr_el3
/* Store x2 - x6, x30 in the crash buffer */
stp x2, x3, [x0, #REGSZ * 2]
stp x4, x5, [x0, #REGSZ * 4]
stp x6, x30, [x0, #REGSZ * 6]
/* Initialize the crash console */
bl plat_crash_console_init
/* Verify the console is initialized */
cbz x0, crash_panic
/* Print the crash message. sp points to the crash message */
mov x4, sp
bl asm_print_str
/* load the crash buf address */
mrs x0, tpidr_el3
/* report x30 first from the crash buf */
ldr x4, [x0, #REGSZ * 7]
bl asm_print_hex
bl print_newline
/* Load the crash buf address */
mrs x0, tpidr_el3
/* Now store x7 into the crash buf */
str x7, [x0, #REGSZ * 7]
/* Report x0 - x29 values stored in crash buf */
/* Store the ascii list pointer in x6 */
adr x6, gp_regs
/* Print x0 to x7 from the crash buf */
bl size_controlled_print
/* Store x8 - x15 in crash buf and print */
bl str_in_crash_buf_print
/* Load the crash buf address */
mrs x0, tpidr_el3
/* Store the rest of gp regs and print */
stp x16, x17, [x0]
stp x18, x19, [x0, #REGSZ * 2]
stp x20, x21, [x0, #REGSZ * 4]
stp x22, x23, [x0, #REGSZ * 6]
bl size_controlled_print
/* Load the crash buf address */
mrs x0, tpidr_el3
stp x24, x25, [x0]
stp x26, x27, [x0, #REGSZ * 2]
stp x28, x29, [x0, #REGSZ * 4]
bl size_controlled_print
/* Print the el3 sys registers */
adr x6, el3_sys_regs
mrs x8, scr_el3
mrs x9, sctlr_el3
mrs x10, cptr_el3
mrs x11, tcr_el3
mrs x12, daif
mrs x13, mair_el3
mrs x14, spsr_el3
mrs x15, elr_el3
bl str_in_crash_buf_print
mrs x8, ttbr0_el3
mrs x9, esr_el3
mrs x10, far_el3
bl str_in_crash_buf_print
/* Print the non el3 sys registers */
adr x6, non_el3_sys_regs
mrs x8, spsr_el1
mrs x9, elr_el1
mrs x10, spsr_abt
mrs x11, spsr_und
mrs x12, spsr_irq
mrs x13, spsr_fiq
mrs x14, sctlr_el1
mrs x15, actlr_el1
bl str_in_crash_buf_print
mrs x8, cpacr_el1
mrs x9, csselr_el1
mrs x10, sp_el1
mrs x11, esr_el1
mrs x12, ttbr0_el1
mrs x13, ttbr1_el1
mrs x14, mair_el1
mrs x15, amair_el1
bl str_in_crash_buf_print
mrs x8, tcr_el1
mrs x9, tpidr_el1
mrs x10, tpidr_el0
mrs x11, tpidrro_el0
mrs x12, dacr32_el2
mrs x13, ifsr32_el2
mrs x14, par_el1
mrs x15, mpidr_el1
bl str_in_crash_buf_print
mrs x8, afsr0_el1
mrs x9, afsr1_el1
mrs x10, contextidr_el1
mrs x11, vbar_el1
mrs x12, cntp_ctl_el0
mrs x13, cntp_cval_el0
mrs x14, cntv_ctl_el0
mrs x15, cntv_cval_el0
bl str_in_crash_buf_print
mrs x8, cntkctl_el1
mrs x9, sp_el0
mrs x10, isr_el1
bl str_in_crash_buf_print
/* Get the cpu specific registers to report */
bl do_cpu_reg_dump
bl str_in_crash_buf_print
/* Print some platform registers */
plat_crash_print_regs
bl plat_crash_console_flush
/* Done reporting */
no_ret plat_panic_handler
endfunc do_crash_reporting
#else /* CRASH_REPORTING */
func report_unhandled_exception
report_unhandled_interrupt:
no_ret plat_panic_handler
endfunc report_unhandled_exception
#endif /* CRASH_REPORTING */
func crash_panic
no_ret plat_panic_handler
endfunc crash_panic

View File

@@ -0,0 +1,586 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <context.h>
#include <cpu_data.h>
#include <ea_handle.h>
#include <interrupt_mgmt.h>
#include <platform_def.h>
#include <runtime_svc.h>
#include <smccc.h>
.globl runtime_exceptions
.globl sync_exception_sp_el0
.globl irq_sp_el0
.globl fiq_sp_el0
.globl serror_sp_el0
.globl sync_exception_sp_elx
.globl irq_sp_elx
.globl fiq_sp_elx
.globl serror_sp_elx
.globl sync_exception_aarch64
.globl irq_aarch64
.globl fiq_aarch64
.globl serror_aarch64
.globl sync_exception_aarch32
.globl irq_aarch32
.globl fiq_aarch32
.globl serror_aarch32
/*
* Macro that prepares entry to EL3 upon taking an exception.
*
* With RAS_EXTENSION, this macro synchronizes pending errors with an ESB
* instruction. When an error is thus synchronized, the handling is
* delegated to platform EA handler.
*
* Without RAS_EXTENSION, this macro just saves x30, and unmasks
* Asynchronous External Aborts.
*/
.macro check_and_unmask_ea
#if RAS_EXTENSION
/* Synchronize pending External Aborts */
esb
/* Unmask the SError interrupt */
msr daifclr, #DAIF_ABT_BIT
/*
* Explicitly save x30 so as to free up a register and to enable
* branching
*/
str x30, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_LR]
/* Check for SErrors synchronized by the ESB instruction */
mrs x30, DISR_EL1
tbz x30, #DISR_A_BIT, 1f
/* Save GP registers and restore them afterwards */
bl save_gp_registers
mov x0, #ERROR_EA_ESB
mrs x1, DISR_EL1
bl delegate_ea
bl restore_gp_registers
1:
#else
/* Unmask the SError interrupt */
msr daifclr, #DAIF_ABT_BIT
str x30, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_LR]
#endif
.endm
/*
* Handle External Abort by delegating to the platform's EA handler.
* Once the platform handler returns, the macro exits EL3 and returns to
* where the abort was taken from.
*
* This macro assumes that x30 is available for use.
*
* 'abort_type' is a constant passed to the platform handler, indicating
* the cause of the External Abort.
*/
.macro handle_ea abort_type
/* Save GP registers */
bl save_gp_registers
/* Setup exception class and syndrome arguments for platform handler */
mov x0, \abort_type
mrs x1, esr_el3
adr x30, el3_exit
b delegate_ea
.endm
/* ---------------------------------------------------------------------
* This macro handles Synchronous exceptions.
* Only SMC exceptions are supported.
* ---------------------------------------------------------------------
*/
.macro handle_sync_exception
#if ENABLE_RUNTIME_INSTRUMENTATION
/*
* Read the timestamp value and store it in per-cpu data. The value
* will be extracted from per-cpu data by the C level SMC handler and
* saved to the PMF timestamp region.
*/
mrs x30, cntpct_el0
str x29, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X29]
mrs x29, tpidr_el3
str x30, [x29, #CPU_DATA_PMF_TS0_OFFSET]
ldr x29, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X29]
#endif
mrs x30, esr_el3
ubfx x30, x30, #ESR_EC_SHIFT, #ESR_EC_LENGTH
/* Handle SMC exceptions separately from other synchronous exceptions */
cmp x30, #EC_AARCH32_SMC
b.eq smc_handler32
cmp x30, #EC_AARCH64_SMC
b.eq smc_handler64
/* Check for I/D aborts from lower EL */
cmp x30, #EC_IABORT_LOWER_EL
b.eq 1f
cmp x30, #EC_DABORT_LOWER_EL
b.ne 2f
1:
/* Test for EA bit in the instruction syndrome */
mrs x30, esr_el3
tbz x30, #ESR_ISS_EABORT_EA_BIT, 2f
handle_ea #ERROR_EA_SYNC
2:
/* Other kinds of synchronous exceptions are not handled */
ldr x30, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_LR]
b report_unhandled_exception
.endm
/* ---------------------------------------------------------------------
* This macro handles FIQ or IRQ interrupts i.e. EL3, S-EL1 and NS
* interrupts.
* ---------------------------------------------------------------------
*/
.macro handle_interrupt_exception label
bl save_gp_registers
/* Save the EL3 system registers needed to return from this exception */
mrs x0, spsr_el3
mrs x1, elr_el3
stp x0, x1, [sp, #CTX_EL3STATE_OFFSET + CTX_SPSR_EL3]
/* Switch to the runtime stack i.e. SP_EL0 */
ldr x2, [sp, #CTX_EL3STATE_OFFSET + CTX_RUNTIME_SP]
mov x20, sp
msr spsel, #0
mov sp, x2
/*
* Find out whether this is a valid interrupt type.
* If the interrupt controller reports a spurious interrupt then return
* to where we came from.
*/
bl plat_ic_get_pending_interrupt_type
cmp x0, #INTR_TYPE_INVAL
b.eq interrupt_exit_\label
/*
* Get the registered handler for this interrupt type.
* A NULL return value could be caused by any of the following conditions:
*
* a. An interrupt of a type was routed correctly but a handler for its
* type was not registered.
*
* b. An interrupt of a type was not routed correctly so a handler for
* its type was not registered.
*
* c. An interrupt of a type was routed correctly to EL3, but was
* deasserted before its pending state could be read. Another
* interrupt of a different type pended at the same time and its
* type was reported as pending instead. However, a handler for this
* type was not registered.
*
* a. and b. can only happen due to a programming error. The
* occurrence of c. could be beyond the control of Trusted Firmware.
* It makes sense to return from this exception instead of reporting an
* error.
*/
bl get_interrupt_type_handler
cbz x0, interrupt_exit_\label
mov x21, x0
mov x0, #INTR_ID_UNAVAILABLE
/* Set the current security state in the 'flags' parameter */
mrs x2, scr_el3
ubfx x1, x2, #0, #1
/* Restore the reference to the 'handle' i.e. SP_EL3 */
mov x2, x20
/* x3 will point to a cookie (not used now) */
mov x3, xzr
/* Call the interrupt type handler */
blr x21
interrupt_exit_\label:
/* Return from exception, possibly in a different security state */
b el3_exit
.endm
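/*
 * For reference, a sketch (inferred from the register setup above, not a
 * new definition) of the C prototype the registered handler invoked via
 * 'blr x21' is expected to match:
 *
 * uint64_t handler(uint32_t id, uint32_t flags, void *handle, void *cookie);
 *
 * with x0 = id (INTR_ID_UNAVAILABLE here), x1 bit[0] = security state of
 * the interrupted world, x2 = handle (the SP_EL3 context pointer) and
 * x3 = cookie (currently unused).
 */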
vector_base runtime_exceptions
/* ---------------------------------------------------------------------
* Current EL with SP_EL0 : 0x0 - 0x200
* ---------------------------------------------------------------------
*/
vector_entry sync_exception_sp_el0
/* We don't expect any synchronous exceptions from EL3 */
b report_unhandled_exception
check_vector_size sync_exception_sp_el0
vector_entry irq_sp_el0
/*
* EL3 code is non-reentrant. Any asynchronous exception is a serious
* error. Loop infinitely.
*/
b report_unhandled_interrupt
check_vector_size irq_sp_el0
vector_entry fiq_sp_el0
b report_unhandled_interrupt
check_vector_size fiq_sp_el0
vector_entry serror_sp_el0
b report_unhandled_exception
check_vector_size serror_sp_el0
/* ---------------------------------------------------------------------
* Current EL with SP_ELx: 0x200 - 0x400
* ---------------------------------------------------------------------
*/
vector_entry sync_exception_sp_elx
/*
* This exception will trigger if anything went wrong during a previous
* exception entry or exit or while handling an earlier unexpected
* synchronous exception. There is a high probability that SP_EL3 is
* corrupted.
*/
b report_unhandled_exception
check_vector_size sync_exception_sp_elx
vector_entry irq_sp_elx
b report_unhandled_interrupt
check_vector_size irq_sp_elx
vector_entry fiq_sp_elx
b report_unhandled_interrupt
check_vector_size fiq_sp_elx
vector_entry serror_sp_elx
b report_unhandled_exception
check_vector_size serror_sp_elx
/* ---------------------------------------------------------------------
* Lower EL using AArch64 : 0x400 - 0x600
* ---------------------------------------------------------------------
*/
vector_entry sync_exception_aarch64
/*
* This exception vector will most commonly be the entry point for SMCs
* and traps that are unhandled at lower ELs. SP_EL3 should point
* to a valid cpu context where the general purpose and system register
* state can be saved.
*/
check_and_unmask_ea
handle_sync_exception
check_vector_size sync_exception_aarch64
vector_entry irq_aarch64
check_and_unmask_ea
handle_interrupt_exception irq_aarch64
check_vector_size irq_aarch64
vector_entry fiq_aarch64
check_and_unmask_ea
handle_interrupt_exception fiq_aarch64
check_vector_size fiq_aarch64
vector_entry serror_aarch64
msr daifclr, #DAIF_ABT_BIT
/*
* Explicitly save x30 so as to free up a register and to enable
* branching
*/
str x30, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_LR]
handle_ea #ERROR_EA_ASYNC
check_vector_size serror_aarch64
/* ---------------------------------------------------------------------
* Lower EL using AArch32 : 0x600 - 0x800
* ---------------------------------------------------------------------
*/
vector_entry sync_exception_aarch32
/*
* This exception vector will most commonly be the entry point for SMCs
* and traps that are unhandled at lower ELs. SP_EL3 should point
* to a valid cpu context where the general purpose and system register
* state can be saved.
*/
check_and_unmask_ea
handle_sync_exception
check_vector_size sync_exception_aarch32
vector_entry irq_aarch32
check_and_unmask_ea
handle_interrupt_exception irq_aarch32
check_vector_size irq_aarch32
vector_entry fiq_aarch32
check_and_unmask_ea
handle_interrupt_exception fiq_aarch32
check_vector_size fiq_aarch32
vector_entry serror_aarch32
msr daifclr, #DAIF_ABT_BIT
/*
* Explicitly save x30 so as to free up a register and to enable
* branching
*/
str x30, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_LR]
handle_ea #ERROR_EA_ASYNC
check_vector_size serror_aarch32
/* ---------------------------------------------------------------------
* This macro takes an argument in x16 that is the index in the
* 'rt_svc_descs_indices' array, checks that the value in the array is
* valid, and loads in x15 the pointer to the handler of that service.
* ---------------------------------------------------------------------
*/
.macro load_rt_svc_desc_pointer
/* Load descriptor index from array of indices */
adr x14, rt_svc_descs_indices
ldrb w15, [x14, x16]
#if SMCCC_MAJOR_VERSION == 1
/* Any index greater than 127 is invalid. Check bit 7. */
tbnz w15, 7, smc_unknown
#elif SMCCC_MAJOR_VERSION == 2
/* Verify that the top 3 bits of the loaded index are 0 (w15 <= 31) */
cmp w15, #31
b.hi smc_unknown
#endif /* SMCCC_MAJOR_VERSION */
/*
* Get the descriptor using the index
* x11 = (base + off), w15 = index
*
* handler = (base + off) + (index << log2(size))
*/
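/*
 * Worked example with illustrative values: if each service descriptor is
 * 32 bytes (RT_SVC_SIZE_LOG2 == 5), index 4 yields an offset of
 * 4 << 5 = 128 bytes, i.e. the handler field of the fifth registered
 * descriptor.
 */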
adr x11, (__RT_SVC_DESCS_START__ + RT_SVC_DESC_HANDLE)
lsl w10, w15, #RT_SVC_SIZE_LOG2
ldr x15, [x11, w10, uxtw]
.endm
/* ---------------------------------------------------------------------
* The following code handles secure monitor calls.
* Depending upon the execution state from which the SMC has been
* invoked, it frees some general purpose registers to perform the
* remaining tasks. They involve finding the runtime service handler
* that is the target of the SMC & switching to runtime stacks (SP_EL0)
* before calling the handler.
*
* Note that x30 has been explicitly saved and can be used here
* ---------------------------------------------------------------------
*/
func smc_handler
smc_handler32:
/* Check whether AArch32 issued an SMC64 */
tbnz x0, #FUNCID_CC_SHIFT, smc_prohibited
smc_handler64:
/*
* Populate the parameters for the SMC handler.
* We already have x0-x4 in place. x5 will point to a cookie (not used
* now). x6 will point to the context structure (SP_EL3) and x7 will
* contain flags we need to pass to the handler.
*
* Save x4-x29 and sp_el0.
*/
stp x4, x5, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X4]
stp x6, x7, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X6]
stp x8, x9, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X8]
stp x10, x11, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X10]
stp x12, x13, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X12]
stp x14, x15, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X14]
stp x16, x17, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X16]
stp x18, x19, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X18]
stp x20, x21, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X20]
stp x22, x23, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X22]
stp x24, x25, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X24]
stp x26, x27, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X26]
stp x28, x29, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X28]
mrs x18, sp_el0
str x18, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_SP_EL0]
mov x5, xzr
mov x6, sp
#if SMCCC_MAJOR_VERSION == 1
/* Get the unique owning entity number */
ubfx x16, x0, #FUNCID_OEN_SHIFT, #FUNCID_OEN_WIDTH
ubfx x15, x0, #FUNCID_TYPE_SHIFT, #FUNCID_TYPE_WIDTH
orr x16, x16, x15, lsl #FUNCID_OEN_WIDTH
load_rt_svc_desc_pointer
#elif SMCCC_MAJOR_VERSION == 2
/* Bit 31 must be set */
tbz x0, #FUNCID_TYPE_SHIFT, smc_unknown
/*
* Check MSB of namespace to decide between compatibility/vendor and
* SPCI/SPRT
*/
tbz x0, #(FUNCID_NAMESPACE_SHIFT + 1), compat_or_vendor
/* Namespaces SPRT and SPCI currently unimplemented */
b smc_unknown
compat_or_vendor:
/* Namespace is b'00 (compatibility) or b'01 (vendor) */
/*
* Add the LSB of the namespace (bit [28]) to the OEN [27:24] to create
* a 5-bit index into the rt_svc_descs_indices array.
*
* The low 16 entries of the rt_svc_descs_indices array correspond to
* OENs of the compatibility namespace and the top 16 entries of the
* array are assigned to the vendor namespace descriptor.
*/
ubfx x16, x0, #FUNCID_OEN_SHIFT, #(FUNCID_OEN_WIDTH + 1)
load_rt_svc_desc_pointer
#endif /* SMCCC_MAJOR_VERSION */
/*
* Restore the saved C runtime stack value which will become the new
* SP_EL0, i.e. the EL3 runtime stack. It was saved in the 'cpu_context'
* structure prior to the last ERET from EL3.
*/
ldr x12, [x6, #CTX_EL3STATE_OFFSET + CTX_RUNTIME_SP]
/* Switch to SP_EL0 */
msr spsel, #0
/*
* Save the SPSR_EL3, ELR_EL3, & SCR_EL3 in case there is a world
* switch during SMC handling.
* TODO: Revisit if all system registers can be saved later.
*/
mrs x16, spsr_el3
mrs x17, elr_el3
mrs x18, scr_el3
stp x16, x17, [x6, #CTX_EL3STATE_OFFSET + CTX_SPSR_EL3]
str x18, [x6, #CTX_EL3STATE_OFFSET + CTX_SCR_EL3]
/* Copy SCR_EL3.NS bit to the flag to indicate caller's security */
bfi x7, x18, #0, #1
mov sp, x12
/*
* Call the Secure Monitor Call handler and then drop directly into
* el3_exit() which will program any remaining architectural state
* prior to issuing the ERET to the desired lower EL.
*/
#if DEBUG
cbz x15, rt_svc_fw_critical_error
#endif
blr x15
b el3_exit
smc_unknown:
/*
* Unknown SMC call. Populate return value with SMC_UNK, restore
* GP registers, and return to caller.
*/
mov x0, #SMC_UNK
str x0, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_X0]
b restore_gp_registers_eret
smc_prohibited:
ldr x30, [sp, #CTX_GPREGS_OFFSET + CTX_GPREG_LR]
mov x0, #SMC_UNK
eret
rt_svc_fw_critical_error:
/* Switch to SP_ELx */
msr spsel, #1
no_ret report_unhandled_exception
endfunc smc_handler
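/*
 * For reference, the handler reached via 'blr x15' above is a C function
 * of the shape declared in runtime_svc.h (the exact typedef is quoted from
 * memory and should be treated as an assumption):
 *
 *	typedef uintptr_t (*rt_svc_handle_t)(uint32_t smc_fid,
 *				u_register_t x1, u_register_t x2,
 *				u_register_t x3, u_register_t x4,
 *				void *cookie, void *handle,
 *				u_register_t flags);
 *
 * which matches the x0-x7 population performed in smc_handler64 above.
 */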
/*
* Delegate External Abort handling to platform's EA handler. This function
* assumes that all GP registers have been saved by the caller.
*
* x0: EA reason
* x1: EA syndrome
*/
func delegate_ea
/* Save EL3 state */
mrs x2, spsr_el3
mrs x3, elr_el3
stp x2, x3, [sp, #CTX_EL3STATE_OFFSET + CTX_SPSR_EL3]
/*
* Save ESR as handling might involve lower ELs, and returning back to
* EL3 from there would trample the original ESR.
*/
mrs x4, scr_el3
mrs x5, esr_el3
stp x4, x5, [sp, #CTX_EL3STATE_OFFSET + CTX_SCR_EL3]
/*
* Setup rest of arguments, and call platform External Abort handler.
*
* x0: EA reason (already in place)
* x1: Exception syndrome (already in place).
* x2: Cookie (unused for now).
* x3: Context pointer.
* x4: Flags (security state from SCR for now).
*/
mov x2, xzr
mov x3, sp
ubfx x4, x4, #0, #1
/* Switch to runtime stack */
ldr x5, [sp, #CTX_EL3STATE_OFFSET + CTX_RUNTIME_SP]
msr spsel, #0
mov sp, x5
mov x29, x30
bl plat_ea_handler
mov x30, x29
/* Make SP point to context */
msr spsel, #1
/* Restore EL3 state */
ldp x1, x2, [sp, #CTX_EL3STATE_OFFSET + CTX_SPSR_EL3]
msr spsr_el3, x1
msr elr_el3, x2
/* Restore ESR_EL3 and SCR_EL3 */
ldp x3, x4, [sp, #CTX_EL3STATE_OFFSET + CTX_SCR_EL3]
msr scr_el3, x3
msr esr_el3, x4
ret
endfunc delegate_ea
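/*
 * For reference, the platform hook invoked above is expected to have a C
 * prototype along these lines (hedged; see platform.h for the
 * authoritative declaration):
 *
 *	void plat_ea_handler(unsigned int ea_reason, uint64_t syndrome,
 *			void *cookie, void *handle, uint64_t flags);
 *
 * mapping directly onto the x0-x4 arguments set up before the 'bl'.
 */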

View File

@@ -0,0 +1,268 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <platform_def.h>
#include <xlat_tables_defs.h>
OUTPUT_FORMAT(PLATFORM_LINKER_FORMAT)
OUTPUT_ARCH(PLATFORM_LINKER_ARCH)
ENTRY(bl31_entrypoint)
MEMORY {
RAM (rwx): ORIGIN = BL31_BASE, LENGTH = BL31_LIMIT - BL31_BASE
}
#ifdef PLAT_EXTRA_LD_SCRIPT
#include <plat.ld.S>
#endif
SECTIONS
{
. = BL31_BASE;
ASSERT(. == ALIGN(PAGE_SIZE),
"BL31_BASE address is not aligned on a page boundary.")
#if SEPARATE_CODE_AND_RODATA
.text . : {
__TEXT_START__ = .;
*bl31_entrypoint.o(.text*)
*(.text*)
*(.vectors)
. = NEXT(PAGE_SIZE);
__TEXT_END__ = .;
} >RAM
.rodata . : {
__RODATA_START__ = .;
*(.rodata*)
/* Ensure 8-byte alignment for descriptors and ensure inclusion */
. = ALIGN(8);
__RT_SVC_DESCS_START__ = .;
KEEP(*(rt_svc_descs))
__RT_SVC_DESCS_END__ = .;
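/*
 * A sketch of how entries land in this section: a runtime service
 * registered with the DECLARE_RT_SVC() macro from runtime_svc.h, e.g.
 *
 *	DECLARE_RT_SVC(std_svc, OEN_STD_START, OEN_STD_END,
 *		SMC_TYPE_FAST, std_svc_setup, std_svc_smc_handler);
 *
 * expands to an rt_svc_desc_t placed in 'rt_svc_descs', which the KEEP()
 * above retains even if nothing references it directly. The macro's
 * argument order is quoted from memory and is an assumption here.
 */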
#if ENABLE_PMF
/* Ensure 8-byte alignment for descriptors and ensure inclusion */
. = ALIGN(8);
__PMF_SVC_DESCS_START__ = .;
KEEP(*(pmf_svc_descs))
__PMF_SVC_DESCS_END__ = .;
#endif /* ENABLE_PMF */
/*
* Ensure 8-byte alignment for cpu_ops so that its fields are also
* aligned. Also ensure cpu_ops inclusion.
*/
. = ALIGN(8);
__CPU_OPS_START__ = .;
KEEP(*(cpu_ops))
__CPU_OPS_END__ = .;
/* Place pubsub sections for events */
. = ALIGN(8);
#include <pubsub_events.h>
. = NEXT(PAGE_SIZE);
__RODATA_END__ = .;
} >RAM
#else
ro . : {
__RO_START__ = .;
*bl31_entrypoint.o(.text*)
*(.text*)
*(.rodata*)
/* Ensure 8-byte alignment for descriptors and ensure inclusion */
. = ALIGN(8);
__RT_SVC_DESCS_START__ = .;
KEEP(*(rt_svc_descs))
__RT_SVC_DESCS_END__ = .;
#if ENABLE_PMF
/* Ensure 8-byte alignment for descriptors and ensure inclusion */
. = ALIGN(8);
__PMF_SVC_DESCS_START__ = .;
KEEP(*(pmf_svc_descs))
__PMF_SVC_DESCS_END__ = .;
#endif /* ENABLE_PMF */
/*
* Ensure 8-byte alignment for cpu_ops so that its fields are also
* aligned. Also ensure cpu_ops inclusion.
*/
. = ALIGN(8);
__CPU_OPS_START__ = .;
KEEP(*(cpu_ops))
__CPU_OPS_END__ = .;
/* Place pubsub sections for events */
. = ALIGN(8);
#include <pubsub_events.h>
*(.vectors)
__RO_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked as read-only,
* executable. No RW data from the next section must creep in.
* Ensure the rest of the current memory page is unused.
*/
. = NEXT(PAGE_SIZE);
__RO_END__ = .;
} >RAM
#endif
ASSERT(__CPU_OPS_END__ > __CPU_OPS_START__,
"cpu_ops not defined for this platform.")
#if ENABLE_SPM
/*
* Exception vectors of the SPM shim layer. They must be aligned to a 2K
* address, but we place them in a separate page so that individual
* permissions can be set on them; hence the actual alignment needed is 4K.
*
* There's no need to include this into the RO section of BL31 because it
* doesn't need to be accessed by BL31.
*/
spm_shim_exceptions : ALIGN(PAGE_SIZE) {
__SPM_SHIM_EXCEPTIONS_START__ = .;
*(.spm_shim_exceptions)
. = NEXT(PAGE_SIZE);
__SPM_SHIM_EXCEPTIONS_END__ = .;
} >RAM
#endif
/*
* Define a linker symbol to mark start of the RW memory area for this
* image.
*/
__RW_START__ = . ;
/*
* .data must be placed at a lower address than the stacks if the stack
* protector is enabled. Alternatively, the .data.stack_protector_canary
* section can be placed independently of the main .data section.
*/
.data . : {
__DATA_START__ = .;
*(.data*)
__DATA_END__ = .;
} >RAM
#ifdef BL31_PROGBITS_LIMIT
ASSERT(. <= BL31_PROGBITS_LIMIT, "BL31 progbits has exceeded its limit.")
#endif
stacks (NOLOAD) : {
__STACKS_START__ = .;
*(tzfw_normal_stacks)
__STACKS_END__ = .;
} >RAM
/*
* The .bss section gets initialised to 0 at runtime.
* Its base address should be 16-byte aligned for better performance of the
* zero-initialization code.
*/
.bss (NOLOAD) : ALIGN(16) {
__BSS_START__ = .;
*(.bss*)
*(COMMON)
#if !USE_COHERENT_MEM
/*
* Bakery locks are stored in normal .bss memory
*
* Each lock's data is spread across multiple cache lines, one per CPU,
* but multiple locks can share the same cache line.
* The compiler will allocate enough memory for one CPU's bakery locks;
* the remaining cache lines are allocated by the linker script.
*/
. = ALIGN(CACHE_WRITEBACK_GRANULE);
__BAKERY_LOCK_START__ = .;
*(bakery_lock)
. = ALIGN(CACHE_WRITEBACK_GRANULE);
__PERCPU_BAKERY_LOCK_SIZE__ = ABSOLUTE(. - __BAKERY_LOCK_START__);
. = . + (__PERCPU_BAKERY_LOCK_SIZE__ * (PLATFORM_CORE_COUNT - 1));
__BAKERY_LOCK_END__ = .;
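/*
 * Worked sizing example (assumed numbers, for illustration only): with
 * CACHE_WRITEBACK_GRANULE = 64, PLATFORM_CORE_COUNT = 8 and 40 bytes of
 * 'bakery_lock' input data, the ALIGN() rounds the per-CPU size up to
 * __PERCPU_BAKERY_LOCK_SIZE__ = 64, and the region reserves
 * 64 * 8 = 512 bytes in total, one cache line per CPU.
 */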
#ifdef PLAT_PERCPU_BAKERY_LOCK_SIZE
ASSERT(__PERCPU_BAKERY_LOCK_SIZE__ == PLAT_PERCPU_BAKERY_LOCK_SIZE,
"PLAT_PERCPU_BAKERY_LOCK_SIZE does not match bakery lock requirements");
#endif
#endif
#if ENABLE_PMF
/*
* Time-stamps are stored in normal .bss memory
*
* The compiler will allocate enough memory for one CPU's time-stamps;
* the remaining memory for the other CPUs is allocated by the
* linker script.
*/
. = ALIGN(CACHE_WRITEBACK_GRANULE);
__PMF_TIMESTAMP_START__ = .;
KEEP(*(pmf_timestamp_array))
. = ALIGN(CACHE_WRITEBACK_GRANULE);
__PMF_PERCPU_TIMESTAMP_END__ = .;
__PERCPU_TIMESTAMP_SIZE__ = ABSOLUTE(. - __PMF_TIMESTAMP_START__);
. = . + (__PERCPU_TIMESTAMP_SIZE__ * (PLATFORM_CORE_COUNT - 1));
__PMF_TIMESTAMP_END__ = .;
#endif /* ENABLE_PMF */
__BSS_END__ = .;
} >RAM
/*
* The xlat_table section is for full, aligned page tables (4K).
* Removing them from .bss avoids forcing 4K alignment on
* the .bss section. The tables are initialized to zero by the translation
* tables library.
*/
xlat_table (NOLOAD) : {
*(xlat_table)
} >RAM
#if USE_COHERENT_MEM
/*
* The base address of the coherent memory section must be page-aligned (4K)
* to guarantee that the coherent data are stored on their own pages and
* are not mixed with normal data. This is required to set up the correct
* memory attributes for the coherent data page tables.
*/
coherent_ram (NOLOAD) : ALIGN(PAGE_SIZE) {
__COHERENT_RAM_START__ = .;
/*
* Bakery locks are stored in coherent memory
*
* Each lock's data is contiguous and fully allocated by the compiler
*/
*(bakery_lock)
*(tzfw_coherent_mem)
__COHERENT_RAM_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked
* as device memory. No other unexpected data must creep in.
* Ensure the rest of the current memory page is unused.
*/
. = NEXT(PAGE_SIZE);
__COHERENT_RAM_END__ = .;
} >RAM
#endif
/*
* Define a linker symbol to mark end of the RW memory area for this
* image.
*/
__RW_END__ = .;
__BL31_END__ = .;
__BSS_SIZE__ = SIZEOF(.bss);
#if USE_COHERENT_MEM
__COHERENT_RAM_UNALIGNED_SIZE__ =
__COHERENT_RAM_END_UNALIGNED__ - __COHERENT_RAM_START__;
#endif
ASSERT(. <= BL31_LIMIT, "BL31 image has exceeded its limit.")
}

View File

@@ -0,0 +1,82 @@
#
# Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
#
################################################################################
# Include SPM Makefile
################################################################################
ifeq (${ENABLE_SPM},1)
$(info Including SPM makefile)
include services/std_svc/spm/spm.mk
endif
include lib/psci/psci_lib.mk
BL31_SOURCES += bl31/bl31_main.c \
bl31/interrupt_mgmt.c \
bl31/aarch64/bl31_entrypoint.S \
bl31/aarch64/runtime_exceptions.S \
bl31/aarch64/crash_reporting.S \
bl31/bl31_context_mgmt.c \
common/runtime_svc.c \
plat/common/aarch64/platform_mp_stack.S \
services/arm_arch_svc/arm_arch_svc_setup.c \
services/std_svc/std_svc_setup.c \
${PSCI_LIB_SOURCES} \
${SPM_SOURCES} \
ifeq (${ENABLE_PMF}, 1)
BL31_SOURCES += lib/pmf/pmf_main.c
endif
ifeq (${EL3_EXCEPTION_HANDLING},1)
BL31_SOURCES += bl31/ehf.c
endif
ifeq (${SDEI_SUPPORT},1)
ifeq (${EL3_EXCEPTION_HANDLING},0)
$(error EL3_EXCEPTION_HANDLING must be 1 for SDEI support)
endif
BL31_SOURCES += services/std_svc/sdei/sdei_event.c \
services/std_svc/sdei/sdei_intr_mgmt.c \
services/std_svc/sdei/sdei_main.c \
services/std_svc/sdei/sdei_state.c
endif
ifeq (${ENABLE_SPE_FOR_LOWER_ELS},1)
BL31_SOURCES += lib/extensions/spe/spe.c
endif
ifeq (${ENABLE_AMU},1)
BL31_SOURCES += lib/extensions/amu/aarch64/amu.c \
lib/extensions/amu/aarch64/amu_helpers.S
endif
ifeq (${ENABLE_SVE_FOR_NS},1)
BL31_SOURCES += lib/extensions/sve/sve.c
endif
ifeq (${WORKAROUND_CVE_2017_5715},1)
BL31_SOURCES += lib/cpus/aarch64/wa_cve_2017_5715_bpiall.S \
lib/cpus/aarch64/wa_cve_2017_5715_mmu.S
endif
BL31_LINKERFILE := bl31/bl31.ld.S
# Flag used to indicate if Crash reporting via console should be included
# in BL31. This defaults to being present in DEBUG builds only
ifndef CRASH_REPORTING
CRASH_REPORTING := $(DEBUG)
endif
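# For example, to force crash reporting into a release build (the platform
# name below is illustrative):
#
#   make PLAT=fvp DEBUG=0 CRASH_REPORTING=1 bl31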
$(eval $(call assert_boolean,CRASH_REPORTING))
$(eval $(call assert_boolean,EL3_EXCEPTION_HANDLING))
$(eval $(call assert_boolean,SDEI_SUPPORT))
$(eval $(call add_define,CRASH_REPORTING))
$(eval $(call add_define,EL3_EXCEPTION_HANDLING))
$(eval $(call add_define,SDEI_SUPPORT))

View File

@@ -0,0 +1,130 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <assert.h>
#include <bl31.h>
#include <bl_common.h>
#include <context.h>
#include <context_mgmt.h>
#include <cpu_data.h>
#include <platform.h>
/*******************************************************************************
* This function returns a pointer to the most recent 'cpu_context' structure
* for the calling CPU that was set as the context for the specified security
* state. NULL is returned if no such structure has been specified.
******************************************************************************/
void *cm_get_context(uint32_t security_state)
{
assert(security_state <= NON_SECURE);
return get_cpu_data(cpu_context[security_state]);
}
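/*
 * Typical use, as in ehf_allow_ns_preemption() elsewhere in this tree:
 *
 *	cpu_context_t *ns_ctx = cm_get_context(NON_SECURE);
 *	assert(ns_ctx);
 *	write_ctx_reg(get_gpregs_ctx(ns_ctx), CTX_GPREG_X0, ret_code);
 */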
/*******************************************************************************
* This function sets the pointer to the current 'cpu_context' structure for the
* specified security state for the calling CPU
******************************************************************************/
void cm_set_context(void *context, uint32_t security_state)
{
assert(security_state <= NON_SECURE);
set_cpu_data(cpu_context[security_state], context);
}
/*******************************************************************************
* This function returns a pointer to the most recent 'cpu_context' structure
* for the CPU identified by `cpu_idx` that was set as the context for the
* specified security state. NULL is returned if no such structure has been
* specified.
******************************************************************************/
void *cm_get_context_by_index(unsigned int cpu_idx,
unsigned int security_state)
{
assert(sec_state_is_valid(security_state));
return get_cpu_data_by_index(cpu_idx, cpu_context[security_state]);
}
/*******************************************************************************
* This function sets the pointer to the current 'cpu_context' structure for the
* specified security state for the CPU identified by CPU index.
******************************************************************************/
void cm_set_context_by_index(unsigned int cpu_idx, void *context,
unsigned int security_state)
{
assert(sec_state_is_valid(security_state));
set_cpu_data_by_index(cpu_idx, cpu_context[security_state], context);
}
#if !ERROR_DEPRECATED
/*
* These context management helpers are deprecated but are maintained for use
* by SPDs which have not migrated to the new API. If ERROR_DEPRECATED
* is enabled, these are excluded from the build so as to force users to
* migrate to the new API.
*/
/*******************************************************************************
* This function returns a pointer to the most recent 'cpu_context' structure
* for the CPU identified by MPIDR that was set as the context for the specified
* security state. NULL is returned if no such structure has been specified.
******************************************************************************/
void *cm_get_context_by_mpidr(uint64_t mpidr, uint32_t security_state)
{
assert(sec_state_is_valid(security_state));
/*
* Suppress deprecated declaration warning in compatibility function
*/
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
return cm_get_context_by_index(platform_get_core_pos(mpidr), security_state);
#pragma GCC diagnostic pop
}
/*******************************************************************************
* This function sets the pointer to the current 'cpu_context' structure for the
* specified security state for the CPU identified by MPIDR
******************************************************************************/
void cm_set_context_by_mpidr(uint64_t mpidr, void *context, uint32_t security_state)
{
assert(sec_state_is_valid(security_state));
/*
* Suppress deprecated declaration warning in compatibility function
*/
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
cm_set_context_by_index(platform_get_core_pos(mpidr),
context, security_state);
#pragma GCC diagnostic pop
}
/*******************************************************************************
* The following function provides a compatibility function for SPDs using the
* existing cm library routines. This function is expected to be invoked to
* initialize the cpu_context of the CPU specified by MPIDR for first use.
******************************************************************************/
void cm_init_context(uint64_t mpidr, const entry_point_info_t *ep)
{
if ((mpidr & MPIDR_AFFINITY_MASK) ==
(read_mpidr_el1() & MPIDR_AFFINITY_MASK))
cm_init_my_context(ep);
else {
/*
* Suppress deprecated declaration warning in compatibility
* function
*/
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
cm_init_context_by_index(platform_get_core_pos(mpidr), ep);
#pragma GCC diagnostic pop
}
}
#endif /* ERROR_DEPRECATED */

View File

@@ -0,0 +1,187 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <arch_helpers.h>
#include <assert.h>
#include <bl31.h>
#include <bl_common.h>
#include <console.h>
#include <context_mgmt.h>
#include <debug.h>
#include <ehf.h>
#include <platform.h>
#include <pmf.h>
#include <runtime_instr.h>
#include <runtime_svc.h>
#include <std_svc.h>
#include <string.h>
#if ENABLE_RUNTIME_INSTRUMENTATION
PMF_REGISTER_SERVICE_SMC(rt_instr_svc, PMF_RT_INSTR_SVC_ID,
RT_INSTR_TOTAL_IDS, PMF_STORE_ENABLE)
#endif
/*******************************************************************************
* This function pointer is used to initialise the BL32 image. It is initialized
* by the SPD calling bl31_register_bl32_init after setting up everything
* necessary for SP execution. In cases where both SPD and SP are absent, or
* when the SPD finds it impossible to execute the SP, this pointer is left NULL.
******************************************************************************/
static int32_t (*bl32_init)(void);
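/*
 * For example (modelled on the TSP dispatcher; the function name is
 * illustrative), an SPD's setup routine registers its hook so that
 * bl31_main() invokes it before preparing BL33 entry:
 *
 *	static int32_t tspd_init(void);
 *	...
 *	bl31_register_bl32_init(&tspd_init);
 */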
/*******************************************************************************
* Variable to indicate whether next image to execute after BL31 is BL33
* (non-secure & default) or BL32 (secure).
******************************************************************************/
static uint32_t next_image_type = NON_SECURE;
/*
* Implement the ARM Standard Service function to get arguments for a
* particular service.
*/
uintptr_t get_arm_std_svc_args(unsigned int svc_mask)
{
/* Setup the arguments for PSCI Library */
DEFINE_STATIC_PSCI_LIB_ARGS_V1(psci_args, bl31_warm_entrypoint);
/* PSCI is the only ARM Standard Service implemented */
assert(svc_mask == PSCI_FID_MASK);
return (uintptr_t)&psci_args;
}
/*******************************************************************************
* Simple function to initialise all BL31 helper libraries.
******************************************************************************/
void bl31_lib_init(void)
{
cm_init();
}
/*******************************************************************************
* BL31 is responsible for setting up the runtime services for the primary cpu
* before passing control to the bootloader or an Operating System. This
* function calls runtime_svc_init() which initializes all registered runtime
* services. The runtime services set up enough context for the core to
* switch to the next exception level. When this function returns, the core will
* switch to the programmed exception level via an ERET.
******************************************************************************/
void bl31_main(void)
{
NOTICE("BL31: %s\n", version_string);
NOTICE("BL31: %s\n", build_message);
/* Perform platform setup in BL31 */
bl31_platform_setup();
/* Initialise helper libraries */
bl31_lib_init();
#if EL3_EXCEPTION_HANDLING
INFO("BL31: Initialising Exception Handling Framework\n");
ehf_init();
#endif
/* Initialize the runtime services e.g. psci. */
INFO("BL31: Initializing runtime services\n");
runtime_svc_init();
/*
* All the cold boot actions on the primary cpu are done. We now need to
* decide which is the next image (BL32 or BL33) and how to execute it.
* If the SPD runtime service is present, it would want to pass control
* to BL32 first in S-EL1. In that case, the SPD would have registered a
* function to initialize BL32, taking responsibility for entering
* S-EL1 and returning control back to bl31_main. Once this is done we
* can prepare entry into BL33 as normal.
*/
/*
* If the SPD has registered an init hook, invoke it.
*/
if (bl32_init) {
INFO("BL31: Initializing BL32\n");
(*bl32_init)();
}
/*
* We are ready to enter the next EL. Prepare entry into the image
* corresponding to the desired security state after the next ERET.
*/
bl31_prepare_next_image_entry();
console_flush();
/*
* Perform any platform specific runtime setup prior to cold boot exit
* from BL31
*/
bl31_plat_runtime_setup();
}
/*******************************************************************************
* Accessor functions to help runtime services decide which image should be
* executed after BL31. This is BL33 or the non-secure bootloader image by
* default but the Secure payload dispatcher could override this by requesting
* an entry into BL32 (Secure payload) first. If it does so then it should use
* the same API to program an entry into BL33 once BL32 initialisation is
* complete.
******************************************************************************/
void bl31_set_next_image_type(uint32_t security_state)
{
assert(sec_state_is_valid(security_state));
next_image_type = security_state;
}
uint32_t bl31_get_next_image_type(void)
{
return next_image_type;
}
/*******************************************************************************
* This function programs EL3 registers and performs other setup to enable entry
* into the next image after BL31 at the next ERET.
******************************************************************************/
void bl31_prepare_next_image_entry(void)
{
entry_point_info_t *next_image_info;
uint32_t image_type;
#if CTX_INCLUDE_AARCH32_REGS
/*
* Ensure that the build flag to save AArch32 system registers in CPU
* context is not set for AArch64-only platforms.
*/
if (EL_IMPLEMENTED(1) == EL_IMPL_A64ONLY) {
ERROR("EL1 supports AArch64-only. Please set build flag "
"CTX_INCLUDE_AARCH32_REGS = 0");
panic();
}
#endif
/* Determine which image to execute next */
image_type = bl31_get_next_image_type();
/* Program EL3 registers to enable entry into the next EL */
next_image_info = bl31_plat_get_next_image_ep_info(image_type);
assert(next_image_info);
assert(image_type == GET_SECURITY_STATE(next_image_info->h.attr));
INFO("BL31: Preparing for EL3 exit to %s world\n",
(image_type == SECURE) ? "secure" : "normal");
print_entry_point_info(next_image_info);
cm_init_my_context(next_image_info);
cm_prepare_el3_exit(image_type);
}
/*******************************************************************************
* This function initializes the pointer to BL32 init function. This is expected
* to be called by the SPD after it finishes all its initialization
******************************************************************************/
void bl31_register_bl32_init(int32_t (*func)(void))
{
bl32_init = func;
}

View File

@@ -0,0 +1,517 @@
/*
* Copyright (c) 2017-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
/*
* Exception handlers at EL3, their priority levels, and management.
*/
#include <assert.h>
#include <context.h>
#include <context_mgmt.h>
#include <cpu_data.h>
#include <debug.h>
#include <ehf.h>
#include <gic_common.h>
#include <interrupt_mgmt.h>
#include <platform.h>
#include <pubsub_events.h>
/* Output EHF logs as verbose */
#define EHF_LOG(...) VERBOSE("EHF: " __VA_ARGS__)
#define EHF_INVALID_IDX (-1)
/* For a valid handler, return the actual function pointer; otherwise, 0. */
#define RAW_HANDLER(h) \
((ehf_handler_t) ((h & _EHF_PRI_VALID) ? (h & ~_EHF_PRI_VALID) : 0))
#define PRI_BIT(idx) (((ehf_pri_bits_t) 1) << idx)
/*
* Convert index into secure priority using the platform-defined priority bits
* field.
*/
#define IDX_TO_PRI(idx) \
((idx << (7 - exception_data.pri_bits)) & 0x7f)
/* Check whether a given index is valid */
#define IS_IDX_VALID(idx) \
((exception_data.ehf_priorities[idx].ehf_handler & _EHF_PRI_VALID) != 0)
/* Returns whether given priority is in secure priority range */
#define IS_PRI_SECURE(pri) ((pri & 0x80) == 0)
/* To be defined by the platform */
extern const ehf_priorities_t exception_data;
/* Translate priority to the index in the priority array */
static int pri_to_idx(unsigned int priority)
{
int idx;
idx = EHF_PRI_TO_IDX(priority, exception_data.pri_bits);
assert((idx >= 0) && (idx < exception_data.num_priorities));
assert(IS_IDX_VALID(idx));
return idx;
}
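/*
 * Worked example (assuming the platform supplies pri_bits = 3, i.e.
 * secure priorities are multiples of 0x10): pri_to_idx(0x20) yields
 * 0x20 >> (7 - 3) = 2, and IDX_TO_PRI(2) = (2 << 4) & 0x7f = 0x20, so the
 * two conversions round-trip. EHF_PRI_TO_IDX() is defined in ehf.h.
 */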
/* Return whether there are outstanding priority activations */
static int has_valid_pri_activations(pe_exc_data_t *pe_data)
{
return pe_data->active_pri_bits != 0;
}
static pe_exc_data_t *this_cpu_data(void)
{
return &get_cpu_data(ehf_data);
}
/*
* Return the current priority index of this CPU. If no priority is active,
* return EHF_INVALID_IDX.
*/
static int get_pe_highest_active_idx(pe_exc_data_t *pe_data)
{
if (!has_valid_pri_activations(pe_data))
return EHF_INVALID_IDX;
/* Current priority is the right-most bit */
return __builtin_ctz(pe_data->active_pri_bits);
}
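/*
 * Example: if active_pri_bits == 0x06 (indices 1 and 2 active),
 * __builtin_ctz() returns 1; the lowest set bit is the lowest index and
 * therefore the highest-priority outstanding activation.
 */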
/*
* Mark priority active by setting the corresponding bit in active_pri_bits and
* programming the priority mask.
*
* This API is to be used as part of delegating to lower ELs other than for
* interrupts; e.g. while handling synchronous exceptions.
*
* This API is expected to be invoked before restoring context (Secure or
* Non-secure) in preparation for the respective dispatch.
*/
void ehf_activate_priority(unsigned int priority)
{
int idx, cur_pri_idx;
unsigned int old_mask, run_pri;
pe_exc_data_t *pe_data = this_cpu_data();
/*
* Query interrupt controller for the running priority, or idle priority
* if no interrupts are being handled. The requested priority must be
* less (higher priority) than the active running priority.
*/
run_pri = plat_ic_get_running_priority();
if (priority >= run_pri) {
ERROR("Running priority higher (0x%x) than requested (0x%x)\n",
run_pri, priority);
panic();
}
/*
* If there were priority activations already, the requested priority
* must be less (higher priority) than the current highest priority
* activation so far.
*/
cur_pri_idx = get_pe_highest_active_idx(pe_data);
idx = pri_to_idx(priority);
if ((cur_pri_idx != EHF_INVALID_IDX) && (idx >= cur_pri_idx)) {
ERROR("Activation priority mismatch: req=0x%x current=0x%x\n",
priority, IDX_TO_PRI(cur_pri_idx));
panic();
}
/* Set the bit corresponding to the requested priority */
pe_data->active_pri_bits |= PRI_BIT(idx);
/*
* Program priority mask for the activated level. Check that the new
* priority mask is setting a higher priority level than the existing
* mask.
*/
old_mask = plat_ic_set_priority_mask(priority);
if (priority >= old_mask) {
ERROR("Requested priority (0x%x) lower than Priority Mask (0x%x)\n",
priority, old_mask);
panic();
}
/*
* If this is the first activation, save the priority mask. This will be
* restored after the last deactivation.
*/
if (cur_pri_idx == EHF_INVALID_IDX)
pe_data->init_pri_mask = old_mask;
EHF_LOG("activate prio=%d\n", get_pe_highest_active_idx(pe_data));
}
/*
* Mark priority inactive by clearing the corresponding bit in active_pri_bits,
* and programming the priority mask.
*
* This API is expected to be used as part of delegating to lower ELs other
* than for interrupts; e.g. while handling synchronous exceptions.
*
* This API is expected to be invoked after saving context (Secure or
* Non-secure), having concluded the respective dispatch.
*/
void ehf_deactivate_priority(unsigned int priority)
{
int idx, cur_pri_idx;
pe_exc_data_t *pe_data = this_cpu_data();
unsigned int old_mask, run_pri;
/*
* Query interrupt controller for the running priority, or idle priority
* if no interrupts are being handled. The requested priority must be
* less (higher priority) than the active running priority.
*/
run_pri = plat_ic_get_running_priority();
if (priority >= run_pri) {
ERROR("Running priority higher (0x%x) than requested (0x%x)\n",
run_pri, priority);
panic();
}
/*
* Deactivation is allowed only when there are priority activations, and
* the deactivation priority level must match the current activated
* priority.
*/
cur_pri_idx = get_pe_highest_active_idx(pe_data);
idx = pri_to_idx(priority);
if ((cur_pri_idx == EHF_INVALID_IDX) || (idx != cur_pri_idx)) {
ERROR("Deactivation priority mismatch: req=0x%x current=0x%x\n",
priority, IDX_TO_PRI(cur_pri_idx));
panic();
}
/* Clear bit corresponding to highest priority */
pe_data->active_pri_bits &= (pe_data->active_pri_bits - 1);
/*
* Restore priority mask corresponding to the next priority, or the
* one stashed earlier if there are no more to deactivate.
*/
idx = get_pe_highest_active_idx(pe_data);
if (idx == EHF_INVALID_IDX)
old_mask = plat_ic_set_priority_mask(pe_data->init_pri_mask);
else
old_mask = plat_ic_set_priority_mask(priority);
if (old_mask > priority) {
ERROR("Deactivation priority (0x%x) lower than Priority Mask (0x%x)\n",
priority, old_mask);
panic();
}
EHF_LOG("deactivate prio=%d\n", get_pe_highest_active_idx(pe_data));
}
/*
* After leaving Non-secure world, stash current Non-secure Priority Mask, and
* set Priority Mask to the highest Non-secure priority so that Non-secure
* interrupts cannot preempt Secure execution.
*
* If the current running priority is in the secure range, or if there are
* outstanding priority activations, this function does nothing.
*
* This function subscribes to the 'cm_exited_normal_world' event published by
* the Context Management Library.
*/
static void *ehf_exited_normal_world(const void *arg)
{
unsigned int run_pri;
pe_exc_data_t *pe_data = this_cpu_data();
/* If the running priority is in the secure range, do nothing */
run_pri = plat_ic_get_running_priority();
if (IS_PRI_SECURE(run_pri))
return 0;
/* Do nothing if there are explicit activations */
if (has_valid_pri_activations(pe_data))
return 0;
assert(pe_data->ns_pri_mask == 0);
pe_data->ns_pri_mask =
plat_ic_set_priority_mask(GIC_HIGHEST_NS_PRIORITY);
/* The previous Priority Mask is not expected to be in secure range */
if (IS_PRI_SECURE(pe_data->ns_pri_mask)) {
ERROR("Priority Mask (0x%x) already in secure range\n",
pe_data->ns_pri_mask);
panic();
}
EHF_LOG("Priority Mask: 0x%x => 0x%x\n", pe_data->ns_pri_mask,
GIC_HIGHEST_NS_PRIORITY);
return 0;
}
/*
* Conclude Secure execution and prepare for return to Non-secure world. Restore
* the Non-secure Priority Mask previously stashed upon leaving Non-secure
* world.
*
* If the current running priority is in the secure range, or if there are
* outstanding priority activations, this function does nothing.
*
* This function subscribes to the 'cm_entering_normal_world' event published by
* the Context Management Library.
*/
static void *ehf_entering_normal_world(const void *arg)
{
unsigned int old_pmr, run_pri;
pe_exc_data_t *pe_data = this_cpu_data();
/* If the running priority is in the secure range, do nothing */
run_pri = plat_ic_get_running_priority();
if (IS_PRI_SECURE(run_pri))
return 0;
/*
* If there are explicit activations, do nothing. The Priority Mask will
* be restored upon the last deactivation.
*/
if (has_valid_pri_activations(pe_data))
return 0;
/* Do nothing if we don't have a valid Priority Mask to restore */
if (pe_data->ns_pri_mask == 0)
return 0;
old_pmr = plat_ic_set_priority_mask(pe_data->ns_pri_mask);
/*
* When exiting secure world, the current Priority Mask must be
* GIC_HIGHEST_NS_PRIORITY (as set during entry), or the Non-secure
* priority mask set upon calling ehf_allow_ns_preemption()
*/
if ((old_pmr != GIC_HIGHEST_NS_PRIORITY) &&
(old_pmr != pe_data->ns_pri_mask)) {
ERROR("Invalid Priority Mask (0x%x) restored\n", old_pmr);
panic();
}
EHF_LOG("Priority Mask: 0x%x => 0x%x\n", old_pmr, pe_data->ns_pri_mask);
pe_data->ns_pri_mask = 0;
return 0;
}
/*
* Program Priority Mask to the original Non-secure priority such that
* Non-secure interrupts may preempt Secure execution, viz. during Yielding SMC
* calls. The 'preempt_ret_code' parameter indicates the Yielding SMC's return
* value in case the call was preempted.
*
* This API is expected to be invoked before delegating a yielding SMC to Secure
* EL1. I.e. within the window of secure execution after Non-secure context is
* saved (after entry into EL3) and Secure context is restored (before entering
* Secure EL1).
*/
void ehf_allow_ns_preemption(uint64_t preempt_ret_code)
{
cpu_context_t *ns_ctx;
unsigned int old_pmr __unused;
pe_exc_data_t *pe_data = this_cpu_data();
/*
* We should have been notified earlier of entering secure world, and
* therefore have stashed the Non-secure priority mask.
*/
assert(pe_data->ns_pri_mask != 0);
/* Make sure no priority levels are active when requesting this */
if (has_valid_pri_activations(pe_data)) {
ERROR("PE %lx has priority activations: 0x%x\n",
read_mpidr_el1(), pe_data->active_pri_bits);
panic();
}
/*
* Program preempted return code to x0 right away so that, if the
* Yielding SMC was indeed preempted before a dispatcher gets a chance
* to populate it, the caller would find the correct return value.
*/
ns_ctx = cm_get_context(NON_SECURE);
assert(ns_ctx);
write_ctx_reg(get_gpregs_ctx(ns_ctx), CTX_GPREG_X0, preempt_ret_code);
old_pmr = plat_ic_set_priority_mask(pe_data->ns_pri_mask);
EHF_LOG("Priority Mask: 0x%x => 0x%x\n", old_pmr, pe_data->ns_pri_mask);
pe_data->ns_pri_mask = 0;
}
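/*
 * Usage sketch (hypothetical dispatcher code; the pairing of calls and the
 * SMC_PREEMPTED return code are assumptions here): before restoring Secure
 * context to delegate a Yielding SMC, a dispatcher would do:
 *
 *	ehf_allow_ns_preemption(SMC_PREEMPTED);
 *	cm_el1_sysregs_context_restore(SECURE);
 *	cm_set_next_eret_context(SECURE);
 *
 * so that a Non-secure interrupt arriving mid-call completes the SMC with
 * SMC_PREEMPTED already in the Non-secure caller's x0.
 */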
/*
* Return whether Secure execution has explicitly allowed Non-secure interrupts
* to preempt itself, viz. during Yielding SMC calls.
*/
unsigned int ehf_is_ns_preemption_allowed(void)
{
unsigned int run_pri;
pe_exc_data_t *pe_data = this_cpu_data();
/* If running priority is in secure range, return false */
run_pri = plat_ic_get_running_priority();
if (IS_PRI_SECURE(run_pri))
return 0;
/*
* If Non-secure preemption was permitted by calling
* ehf_allow_ns_preemption() earlier:
*
* - There wouldn't have been priority activations;
* - We would have cleared the stashed Non-secure Priority Mask.
*/
if (has_valid_pri_activations(pe_data))
return 0;
if (pe_data->ns_pri_mask != 0)
return 0;
return 1;
}
/*
* Top-level EL3 interrupt handler.
*/
static uint64_t ehf_el3_interrupt_handler(uint32_t id, uint32_t flags,
void *handle, void *cookie)
{
int pri, idx, intr, intr_raw, ret = 0;
ehf_handler_t handler;
/*
* Top-level interrupt type handler from Interrupt Management Framework
* doesn't acknowledge the interrupt; so the interrupt ID must be
* invalid.
*/
assert(id == INTR_ID_UNAVAILABLE);
/*
* Acknowledge interrupt. Proceed with handling only for valid interrupt
* IDs. This situation may arise when the Interrupt Management
* Framework identifies an EL3 interrupt but, before it is acknowledged
* here, the interrupt is either deasserted or superseded by a
* higher-priority interrupt of another type.
*/
intr_raw = plat_ic_acknowledge_interrupt();
intr = plat_ic_get_interrupt_id(intr_raw);
if (intr == INTR_ID_UNAVAILABLE)
return 0;
/* Having acknowledged the interrupt, get the running priority */
pri = plat_ic_get_running_priority();
/* Check EL3 interrupt priority is in secure range */
assert(IS_PRI_SECURE(pri));
/*
* Translate the priority to a descriptor index. We do this by masking
* and shifting the running priority value (platform-supplied).
*/
idx = pri_to_idx(pri);
/* Validate priority */
assert(pri == IDX_TO_PRI(idx));
handler = RAW_HANDLER(exception_data.ehf_priorities[idx].ehf_handler);
if (!handler) {
ERROR("No EL3 exception handler for priority 0x%x\n",
IDX_TO_PRI(idx));
panic();
}
/*
* Call registered handler. Pass the raw interrupt value to registered
* handlers.
*/
ret = handler(intr_raw, flags, handle, cookie);
return ret;
}
/*
* Initialize the EL3 exception handling.
*/
void ehf_init(void)
{
unsigned int flags = 0;
int ret __unused;
/* Ensure EL3 interrupts are supported */
assert(plat_ic_has_interrupt_type(INTR_TYPE_EL3));
/*
* Make sure that the priority bit field (ehf_pri_bits_t) has enough bits
* to represent the whole priority array.
*/
assert(exception_data.num_priorities <= (sizeof(ehf_pri_bits_t) * 8));
assert(exception_data.ehf_priorities);
/*
* Bit 7 of GIC priority must be 0 for secure interrupts. This means
* platforms must use at least 1 of the remaining 7 bits.
*/
assert((exception_data.pri_bits >= 1) && (exception_data.pri_bits < 8));
/* Route EL3 interrupts when in Secure and Non-secure. */
set_interrupt_rm_flag(flags, NON_SECURE);
set_interrupt_rm_flag(flags, SECURE);
/* Register handler for EL3 interrupts */
ret = register_interrupt_type_handler(INTR_TYPE_EL3,
ehf_el3_interrupt_handler, flags);
assert(ret == 0);
}
/*
* Register a handler at the supplied priority. Registration is allowed only if
* a handler hasn't been registered before, or one wasn't provided at build
* time. The priority for which the handler is being registered must also accord
* with the platform-supplied data.
*/
void ehf_register_priority_handler(unsigned int pri, ehf_handler_t handler)
{
int idx;
/* Sanity check for handler */
assert(handler != NULL);
/* Handler ought to be 4-byte aligned */
assert((((uintptr_t) handler) & 3) == 0);
/* Ensure we register for valid priority */
idx = pri_to_idx(pri);
assert(idx < exception_data.num_priorities);
assert(IDX_TO_PRI(idx) == pri);
/* Return failure if a handler was already registered */
if (exception_data.ehf_priorities[idx].ehf_handler != _EHF_NO_HANDLER) {
ERROR("Handler already registered for priority 0x%x\n", pri);
panic();
}
/*
* Install handler, and retain the valid bit. We assume that the handler
* is 4-byte aligned, which is usually the case.
*/
exception_data.ehf_priorities[idx].ehf_handler =
(((uintptr_t) handler) | _EHF_PRI_VALID);
EHF_LOG("register pri=0x%x handler=%p\n", pri, handler);
}
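/*
 * Registration sketch (the EHF_PRI_DESC/EHF_REGISTER_PRIORITIES macro
 * names are quoted from ehf.h from memory, and the PLAT_* constants are
 * illustrative): a platform describes its secure priority levels once,
 * then a dispatcher claims one at runtime:
 *
 *	ehf_pri_desc_t plat_exceptions[] = {
 *		EHF_PRI_DESC(PLAT_PRI_BITS, PLAT_SDEI_CRITICAL_PRI),
 *	};
 *	EHF_REGISTER_PRIORITIES(plat_exceptions,
 *		ARRAY_SIZE(plat_exceptions), PLAT_PRI_BITS);
 *	...
 *	ehf_register_priority_handler(PLAT_SDEI_CRITICAL_PRI, sdei_handler);
 */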
SUBSCRIBE_TO_EVENT(cm_entering_normal_world, ehf_entering_normal_world);
SUBSCRIBE_TO_EVENT(cm_exited_normal_world, ehf_exited_normal_world);

View File

@@ -0,0 +1,226 @@
/*
* Copyright (c) 2014, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <assert.h>
#include <bl_common.h>
#include <context_mgmt.h>
#include <errno.h>
#include <interrupt_mgmt.h>
#include <platform.h>
#include <stdio.h>
/*******************************************************************************
* Local structure and corresponding array to keep track of the state of the
* registered interrupt handlers for each interrupt type.
* The field descriptions are:
*
* 'flags' : Bit[0], Routing model for this interrupt type when execution is
* not in EL3 in the secure state. '1' implies that this
* interrupt will be routed to EL3. '0' implies that this
* interrupt will be routed to the current exception level.
*
* Bit[1], Routing model for this interrupt type when execution is
* not in EL3 in the non-secure state. '1' implies that this
* interrupt will be routed to EL3. '0' implies that this
* interrupt will be routed to the current exception level.
*
* All other bits are reserved and SBZ.
*
* 'scr_el3[2]' : Mapping of the routing model in the 'flags' field to the
* value of the SCR_EL3.IRQ or FIQ bit for each security state.
* There are two instances of this field corresponding to the
* two security states.
******************************************************************************/
typedef struct intr_type_desc {
interrupt_type_handler_t handler;
uint32_t flags;
uint32_t scr_el3[2];
} intr_type_desc_t;
static intr_type_desc_t intr_type_descs[MAX_INTR_TYPES];
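/*
 * Registration sketch using this table (both helpers are used in
 * ehf_init() elsewhere in this tree; my_handler is hypothetical): route
 * Secure-EL1 interrupts to EL3 while in the Non-secure state and install
 * a handler for them:
 *
 *	uint32_t flags = 0;
 *	set_interrupt_rm_flag(flags, NON_SECURE);
 *	int32_t rc = register_interrupt_type_handler(INTR_TYPE_S_EL1,
 *			my_handler, flags);
 *	assert(rc == 0);
 */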
/*******************************************************************************
* This function validates the interrupt type.
******************************************************************************/
static int32_t validate_interrupt_type(uint32_t type)
{
if (type == INTR_TYPE_S_EL1 || type == INTR_TYPE_NS ||
type == INTR_TYPE_EL3)
return 0;
return -EINVAL;
}
/*******************************************************************************
* This function validates the routing model for this type of interrupt
******************************************************************************/
static int32_t validate_routing_model(uint32_t type, uint32_t flags)
{
flags >>= INTR_RM_FLAGS_SHIFT;
flags &= INTR_RM_FLAGS_MASK;
if (type == INTR_TYPE_S_EL1)
return validate_sel1_interrupt_rm(flags);
if (type == INTR_TYPE_NS)
return validate_ns_interrupt_rm(flags);
if (type == INTR_TYPE_EL3)
return validate_el3_interrupt_rm(flags);
return -EINVAL;
}
/*******************************************************************************
* This function returns the cached copy of the SCR_EL3 which contains the
* routing model (expressed through the IRQ and FIQ bits) for a security state
* which was stored through a call to 'set_routing_model()' earlier.
******************************************************************************/
uint32_t get_scr_el3_from_routing_model(uint32_t security_state)
{
uint32_t scr_el3;
assert(sec_state_is_valid(security_state));
scr_el3 = intr_type_descs[INTR_TYPE_NS].scr_el3[security_state];
scr_el3 |= intr_type_descs[INTR_TYPE_S_EL1].scr_el3[security_state];
scr_el3 |= intr_type_descs[INTR_TYPE_EL3].scr_el3[security_state];
return scr_el3;
}
/*******************************************************************************
* This function uses the 'interrupt_type_flags' parameter to obtain the value
* of the trap bit (IRQ/FIQ) in the SCR_EL3 for a security state for this
* interrupt type. It uses it to update the SCR_EL3 in the cpu context and the
* 'intr_type_desc' for that security state.
******************************************************************************/
static void set_scr_el3_from_rm(uint32_t type,
uint32_t interrupt_type_flags,
uint32_t security_state)
{
uint32_t flag, bit_pos;
flag = get_interrupt_rm_flag(interrupt_type_flags, security_state);
bit_pos = plat_interrupt_type_to_line(type, security_state);
intr_type_descs[type].scr_el3[security_state] = flag << bit_pos;
/*
 * Update scr_el3 only if there is a context available. If not, it
 * will be updated later during context initialization, which will
 * obtain the scr_el3 value to be used via
 * get_scr_el3_from_routing_model().
 */
if (cm_get_context(security_state))
cm_write_scr_el3_bit(security_state, bit_pos, flag);
}
/*******************************************************************************
* This function validates the routing model specified in the 'flags' and
* updates internal data structures to reflect the new routing model. It also
* updates the copy of SCR_EL3 for each security state with the new routing
* model in the 'cpu_context' structure for this cpu.
******************************************************************************/
int32_t set_routing_model(uint32_t type, uint32_t flags)
{
int32_t rc;
rc = validate_interrupt_type(type);
if (rc)
return rc;
rc = validate_routing_model(type, flags);
if (rc)
return rc;
/* Update the routing model in internal data structures */
intr_type_descs[type].flags = flags;
set_scr_el3_from_rm(type, flags, SECURE);
set_scr_el3_from_rm(type, flags, NON_SECURE);
return 0;
}
/******************************************************************************
* This function disables the routing model of interrupt 'type' from the
* specified 'security_state' on the local core. The disable is in effect
* till the core powers down or till the next enable for that interrupt
* type.
*****************************************************************************/
int disable_intr_rm_local(uint32_t type, uint32_t security_state)
{
uint32_t bit_pos, flag;
assert(intr_type_descs[type].handler);
flag = get_interrupt_rm_flag(INTR_DEFAULT_RM, security_state);
bit_pos = plat_interrupt_type_to_line(type, security_state);
cm_write_scr_el3_bit(security_state, bit_pos, flag);
return 0;
}
/******************************************************************************
* This function enables the routing model of interrupt 'type' from the
* specified 'security_state' on the local core.
*****************************************************************************/
int enable_intr_rm_local(uint32_t type, uint32_t security_state)
{
uint32_t bit_pos, flag;
assert(intr_type_descs[type].handler);
flag = get_interrupt_rm_flag(intr_type_descs[type].flags,
security_state);
bit_pos = plat_interrupt_type_to_line(type, security_state);
cm_write_scr_el3_bit(security_state, bit_pos, flag);
return 0;
}
/*******************************************************************************
* This function registers a handler for the 'type' of interrupt specified. It
* also validates the routing model specified in the 'flags' for this type of
* interrupt.
******************************************************************************/
int32_t register_interrupt_type_handler(uint32_t type,
interrupt_type_handler_t handler,
uint32_t flags)
{
int32_t rc;
/* Validate the 'handler' parameter */
if (!handler)
return -EINVAL;
/* Validate the 'flags' parameter */
if (flags & INTR_TYPE_FLAGS_MASK)
return -EINVAL;
/* Check if a handler has already been registered */
if (intr_type_descs[type].handler)
return -EALREADY;
rc = set_routing_model(type, flags);
if (rc)
return rc;
/* Save the handler */
intr_type_descs[type].handler = handler;
return 0;
}
/*******************************************************************************
* This function is called when an interrupt is generated and returns the
* handler for the interrupt type (if registered). It returns NULL if the
* interrupt type is not supported or its handler has not been registered.
******************************************************************************/
interrupt_type_handler_t get_interrupt_type_handler(uint32_t type)
{
if (validate_interrupt_type(type))
return NULL;
return intr_type_descs[type].handler;
}

View File

@@ -0,0 +1,15 @@
#
# Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
#
# This makefile only aims at complying with the ARM Trusted Firmware build
# process, so that "optee" is a valid ARM Trusted Firmware AArch32 Secure
# Payload identifier.
ifneq ($(ARCH),aarch32)
$(error This directory targets AArch32 support)
endif
$(eval $(call add_define,AARCH32_SP_OPTEE))
$(info ARM Trusted Firmware built for OP-TEE payload support)

View File

@@ -0,0 +1,330 @@
/*
* Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl_common.h>
#include <context.h>
#include <el3_common_macros.S>
#include <runtime_svc.h>
#include <smccc_helpers.h>
#include <smccc_macros.S>
#include <xlat_tables_defs.h>
.globl sp_min_vector_table
.globl sp_min_entrypoint
.globl sp_min_warm_entrypoint
.globl sp_min_handle_smc
.globl sp_min_handle_fiq
.macro route_fiq_to_sp_min reg
/* -----------------------------------------------------
* FIQs are secure interrupts trapped by the Monitor; the
* Non-secure world is not allowed to mask FIQs.
* -----------------------------------------------------
*/
ldcopr \reg, SCR
orr \reg, \reg, #SCR_FIQ_BIT
bic \reg, \reg, #SCR_FW_BIT
stcopr \reg, SCR
.endm
.macro clrex_on_monitor_entry
#if (ARM_ARCH_MAJOR == 7)
/*
* ARMv7 architectures need to clear the exclusive access when
* entering Monitor mode.
*/
clrex
#endif
.endm
vector_base sp_min_vector_table
b sp_min_entrypoint
b plat_panic_handler /* Undef */
b sp_min_handle_smc /* Syscall */
b plat_panic_handler /* Prefetch abort */
b plat_panic_handler /* Data abort */
b plat_panic_handler /* Reserved */
b plat_panic_handler /* IRQ */
b sp_min_handle_fiq /* FIQ */
/*
* The Cold boot/Reset entrypoint for SP_MIN
*/
func sp_min_entrypoint
#if !RESET_TO_SP_MIN
/* ---------------------------------------------------------------
* Preceding bootloader has populated r0 with a pointer to a
* 'bl_params_t' structure & r1 with a pointer to platform
* specific structure
* ---------------------------------------------------------------
*/
mov r9, r0
mov r10, r1
mov r11, r2
mov r12, r3
/* ---------------------------------------------------------------------
* For !RESET_TO_SP_MIN systems, only the primary CPU ever reaches
* sp_min_entrypoint() during the cold boot flow, so the cold/warm boot
* and primary/secondary CPU logic should not be executed in this case.
*
* Also, assume that the previous bootloader has already initialised the
* SCTLR, including the CPU endianness, and has initialised the memory.
* ---------------------------------------------------------------------
*/
el3_entrypoint_common \
_init_sctlr=0 \
_warm_boot_mailbox=0 \
_secondary_cold_boot=0 \
_init_memory=0 \
_init_c_runtime=1 \
_exception_vectors=sp_min_vector_table
/* ---------------------------------------------------------------------
* Relay the previous bootloader's arguments to the platform layer
* ---------------------------------------------------------------------
*/
#else
/* ---------------------------------------------------------------------
* For RESET_TO_SP_MIN systems which have a programmable reset address,
* sp_min_entrypoint() is executed only on the cold boot path so we can
* skip the warm boot mailbox mechanism.
* ---------------------------------------------------------------------
*/
el3_entrypoint_common \
_init_sctlr=1 \
_warm_boot_mailbox=!PROGRAMMABLE_RESET_ADDRESS \
_secondary_cold_boot=!COLD_BOOT_SINGLE_CPU \
_init_memory=1 \
_init_c_runtime=1 \
_exception_vectors=sp_min_vector_table
/* ---------------------------------------------------------------------
* For RESET_TO_SP_MIN systems, BL32 (SP_MIN) is the first bootloader
* to run so there's no argument to relay from a previous bootloader.
* Zero the arguments passed to the platform layer to reflect that.
* ---------------------------------------------------------------------
*/
mov r9, #0
mov r10, #0
mov r11, #0
mov r12, #0
#endif /* RESET_TO_SP_MIN */
#if SP_MIN_WITH_SECURE_FIQ
route_fiq_to_sp_min r4
#endif
mov r0, r9
mov r1, r10
mov r2, r11
mov r3, r12
bl sp_min_early_platform_setup2
bl sp_min_plat_arch_setup
/* Jump to the main function */
bl sp_min_main
/* -------------------------------------------------------------
* Clean the .data & .bss sections to main memory. This ensures
* that any global data which was initialised by the primary CPU
* is visible to secondary CPUs before they enable their data
* caches and participate in coherency.
* -------------------------------------------------------------
*/
ldr r0, =__DATA_START__
ldr r1, =__DATA_END__
sub r1, r1, r0
bl clean_dcache_range
ldr r0, =__BSS_START__
ldr r1, =__BSS_END__
sub r1, r1, r0
bl clean_dcache_range
bl smc_get_next_ctx
/* r0 points to `smc_ctx_t` */
/* The PSCI cpu_context registers have been copied to `smc_ctx_t` */
b sp_min_exit
endfunc sp_min_entrypoint
/*
* SMC handling function for SP_MIN.
*/
func sp_min_handle_smc
/* On SMC entry, `sp` points to `smc_ctx_t`. Save `lr`. */
str lr, [sp, #SMC_CTX_LR_MON]
smccc_save_gp_mode_regs
clrex_on_monitor_entry
/*
* `sp` still points to `smc_ctx_t`. Save it to a register
* and restore the C runtime stack pointer to `sp`.
*/
mov r2, sp /* handle */
ldr sp, [r2, #SMC_CTX_SP_MON]
ldr r0, [r2, #SMC_CTX_SCR]
and r3, r0, #SCR_NS_BIT /* flags */
/* Switch to Secure Mode*/
bic r0, #SCR_NS_BIT
stcopr r0, SCR
isb
/*
* Set PMCR.DP to 1 to prohibit cycle counting whilst in Secure Mode.
* Also, the PMCR.LC field has an architecturally UNKNOWN value on reset
* and so is set to 1, as ARM has deprecated use of PMCR.LC=0.
*/
ldcopr r0, PMCR
orr r0, r0, #(PMCR_LC_BIT | PMCR_DP_BIT)
stcopr r0, PMCR
ldr r0, [r2, #SMC_CTX_GPREG_R0] /* smc_fid */
/* Check whether an SMC64 is issued */
tst r0, #(FUNCID_CC_MASK << FUNCID_CC_SHIFT)
beq 1f
/* Not an SMC32; return an error to the caller */
mov r0, #SMC_UNK
str r0, [r2, #SMC_CTX_GPREG_R0]
mov r0, r2
b sp_min_exit
1:
/* SMC32 is detected */
mov r1, #0 /* cookie */
bl handle_runtime_svc
/* `r0` points to `smc_ctx_t` */
b sp_min_exit
endfunc sp_min_handle_smc
/*
* Secure Interrupts handling function for SP_MIN.
*/
func sp_min_handle_fiq
#if !SP_MIN_WITH_SECURE_FIQ
b plat_panic_handler
#else
/* FIQ has a +4 offset for lr compared to preferred return address */
sub lr, lr, #4
/* On SMC entry, `sp` points to `smc_ctx_t`. Save `lr`. */
str lr, [sp, #SMC_CTX_LR_MON]
smccc_save_gp_mode_regs
clrex_on_monitor_entry
/* load run-time stack */
mov r2, sp
ldr sp, [r2, #SMC_CTX_SP_MON]
/* Switch to Secure Mode */
ldr r0, [r2, #SMC_CTX_SCR]
bic r0, #SCR_NS_BIT
stcopr r0, SCR
isb
/*
* Set PMCR.DP to 1 to prohibit cycle counting whilst in Secure Mode.
* Also, the PMCR.LC field has an architecturally UNKNOWN value on reset
* and so is set to 1, as ARM has deprecated use of PMCR.LC=0.
*/
ldcopr r0, PMCR
orr r0, r0, #(PMCR_LC_BIT | PMCR_DP_BIT)
stcopr r0, PMCR
push {r2, r3}
bl sp_min_fiq
pop {r0, r3}
b sp_min_exit
#endif
endfunc sp_min_handle_fiq
/*
* The Warm boot entrypoint for SP_MIN.
*/
func sp_min_warm_entrypoint
/*
* On the warm boot path, most of the EL3 initialisations performed by
* 'el3_entrypoint_common' must be skipped:
*
* - Only when the platform bypasses the BL1/BL32 (SP_MIN) entrypoint by
* programming the reset address do we need to initialise the SCTLR.
* In other cases, we assume this has been taken care of by the
* entrypoint code.
*
* - No need to determine the type of boot, we know it is a warm boot.
*
* - Do not try to distinguish between primary and secondary CPUs, this
* notion only exists for a cold boot.
*
* - No need to initialise the memory or the C runtime environment,
* it has been done once and for all on the cold boot path.
*/
el3_entrypoint_common \
_init_sctlr=PROGRAMMABLE_RESET_ADDRESS \
_warm_boot_mailbox=0 \
_secondary_cold_boot=0 \
_init_memory=0 \
_init_c_runtime=0 \
_exception_vectors=sp_min_vector_table
/*
* We're about to enable MMU and participate in PSCI state coordination.
*
* The PSCI implementation invokes platform routines that enable CPUs to
* participate in coherency. On a system where CPUs are not
* cache-coherent without appropriate platform specific programming,
* having caches enabled until such time might lead to coherency issues
* (resulting from stale data getting speculatively fetched, among
* others). Therefore we keep data caches disabled even after enabling
* the MMU for such platforms.
*
* On systems with hardware-assisted coherency, or on single cluster
* platforms, such platform specific programming is not required to
* enter coherency (as CPUs already are); and there's no reason to have
* caches disabled either.
*/
mov r0, #DISABLE_DCACHE
bl bl32_plat_enable_mmu
#if SP_MIN_WITH_SECURE_FIQ
route_fiq_to_sp_min r0
#endif
#if HW_ASSISTED_COHERENCY || WARMBOOT_ENABLE_DCACHE_EARLY
ldcopr r0, SCTLR
orr r0, r0, #SCTLR_C_BIT
stcopr r0, SCTLR
isb
#endif
bl sp_min_warm_boot
bl smc_get_next_ctx
/* r0 points to `smc_ctx_t` */
/* The PSCI cpu_context registers have been copied to `smc_ctx_t` */
b sp_min_exit
endfunc sp_min_warm_entrypoint
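The `#if HW_ASSISTED_COHERENCY || WARMBOOT_ENABLE_DCACHE_EARLY` block above is the only place the warm path turns the data cache on before the PSCI power-on hooks run. The same decision restated as a small standalone C sketch; the shadow-register accessors stand in for the ldcopr/stcopr CP15 sequence and are assumptions of this sketch:

#include <stdint.h>

#define SCTLR_C_BIT (1u << 2) /* SCTLR.C: data/unified cache enable */

/* Shadow register standing in for the real CP15 SCTLR accesses. */
static uint32_t sctlr_shadow;

static uint32_t read_sctlr(void)
{
        return sctlr_shadow;
}

static void write_sctlr(uint32_t val)
{
        sctlr_shadow = val;
}

/* On hardware-coherent platforms the data cache may go on as soon as
 * the MMU is up; otherwise it stays off until the platform power-on
 * hooks have entered coherency. */
static void warm_boot_maybe_enable_dcache(int coherent_without_programming)
{
        if (coherent_without_programming)
                write_sctlr(read_sctlr() | SCTLR_C_BIT);
}

int main(void)
{
        warm_boot_maybe_enable_dcache(1);
        return (int)(sctlr_shadow != SCTLR_C_BIT); /* 0 on success */
}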
/*
* The function to restore the registers from the SMC context and return
* to the mode specified in the saved SPSR.
*
* Arguments : r0 must point to the SMC context to restore from.
*/
func sp_min_exit
monitor_exit
endfunc sp_min_exit

View File

@@ -0,0 +1,225 @@
/*
* Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <platform_def.h>
#include <xlat_tables_defs.h>
OUTPUT_FORMAT(elf32-littlearm)
OUTPUT_ARCH(arm)
ENTRY(sp_min_vector_table)
MEMORY {
RAM (rwx): ORIGIN = BL32_BASE, LENGTH = BL32_LIMIT - BL32_BASE
}
SECTIONS
{
. = BL32_BASE;
ASSERT(. == ALIGN(PAGE_SIZE),
"BL32_BASE address is not aligned on a page boundary.")
#if SEPARATE_CODE_AND_RODATA
.text . : {
__TEXT_START__ = .;
*entrypoint.o(.text*)
*(.text*)
*(.vectors)
. = NEXT(PAGE_SIZE);
__TEXT_END__ = .;
} >RAM
.rodata . : {
__RODATA_START__ = .;
*(.rodata*)
/* Ensure 4-byte alignment for descriptors and ensure inclusion */
. = ALIGN(4);
__RT_SVC_DESCS_START__ = .;
KEEP(*(rt_svc_descs))
__RT_SVC_DESCS_END__ = .;
/*
* Ensure 4-byte alignment for cpu_ops so that its fields are also
* aligned. Also ensure cpu_ops inclusion.
*/
. = ALIGN(4);
__CPU_OPS_START__ = .;
KEEP(*(cpu_ops))
__CPU_OPS_END__ = .;
/* Place pubsub sections for events */
. = ALIGN(8);
#include <pubsub_events.h>
. = NEXT(PAGE_SIZE);
__RODATA_END__ = .;
} >RAM
#else
ro . : {
__RO_START__ = .;
*entrypoint.o(.text*)
*(.text*)
*(.rodata*)
/* Ensure 4-byte alignment for descriptors and ensure inclusion */
. = ALIGN(4);
__RT_SVC_DESCS_START__ = .;
KEEP(*(rt_svc_descs))
__RT_SVC_DESCS_END__ = .;
/*
* Ensure 4-byte alignment for cpu_ops so that its fields are also
* aligned. Also ensure cpu_ops inclusion.
*/
. = ALIGN(4);
__CPU_OPS_START__ = .;
KEEP(*(cpu_ops))
__CPU_OPS_END__ = .;
/* Place pubsub sections for events */
. = ALIGN(8);
#include <pubsub_events.h>
*(.vectors)
__RO_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked as
* read-only, executable. No RW data from the next section must
* creep in. Ensure the rest of the current memory block is unused.
*/
. = NEXT(PAGE_SIZE);
__RO_END__ = .;
} >RAM
#endif
ASSERT(__CPU_OPS_END__ > __CPU_OPS_START__,
"cpu_ops not defined for this platform.")
/*
* Define a linker symbol to mark start of the RW memory area for this
* image.
*/
__RW_START__ = . ;
.data . : {
__DATA_START__ = .;
*(.data*)
__DATA_END__ = .;
} >RAM
#ifdef BL32_PROGBITS_LIMIT
ASSERT(. <= BL32_PROGBITS_LIMIT, "BL32 progbits has exceeded its limit.")
#endif
stacks (NOLOAD) : {
__STACKS_START__ = .;
*(tzfw_normal_stacks)
__STACKS_END__ = .;
} >RAM
/*
* The .bss section gets initialised to 0 at runtime.
* Its base address should be 8-byte aligned for better performance of the
* zero-initialization code.
*/
.bss (NOLOAD) : ALIGN(8) {
__BSS_START__ = .;
*(.bss*)
*(COMMON)
#if !USE_COHERENT_MEM
/*
* Bakery locks are stored in normal .bss memory
*
* Each lock's data is spread across multiple cache lines, one per CPU,
* but multiple locks can share the same cache line.
* The compiler will allocate enough memory for one CPU's bakery locks;
* the remaining cache lines are allocated by the linker script.
*/
. = ALIGN(CACHE_WRITEBACK_GRANULE);
__BAKERY_LOCK_START__ = .;
*(bakery_lock)
. = ALIGN(CACHE_WRITEBACK_GRANULE);
__PERCPU_BAKERY_LOCK_SIZE__ = ABSOLUTE(. - __BAKERY_LOCK_START__);
. = . + (__PERCPU_BAKERY_LOCK_SIZE__ * (PLATFORM_CORE_COUNT - 1));
__BAKERY_LOCK_END__ = .;
#ifdef PLAT_PERCPU_BAKERY_LOCK_SIZE
ASSERT(__PERCPU_BAKERY_LOCK_SIZE__ == PLAT_PERCPU_BAKERY_LOCK_SIZE,
"PLAT_PERCPU_BAKERY_LOCK_SIZE does not match bakery lock requirements");
#endif
#endif
#if ENABLE_PMF
/*
* Time-stamps are stored in normal .bss memory
*
* The compiler will allocate enough memory for one CPU's time-stamps;
* the remaining memory for the other CPUs is allocated by the
* linker script.
*/
. = ALIGN(CACHE_WRITEBACK_GRANULE);
__PMF_TIMESTAMP_START__ = .;
KEEP(*(pmf_timestamp_array))
. = ALIGN(CACHE_WRITEBACK_GRANULE);
__PMF_PERCPU_TIMESTAMP_END__ = .;
__PERCPU_TIMESTAMP_SIZE__ = ABSOLUTE(. - __PMF_TIMESTAMP_START__);
. = . + (__PERCPU_TIMESTAMP_SIZE__ * (PLATFORM_CORE_COUNT - 1));
__PMF_TIMESTAMP_END__ = .;
#endif /* ENABLE_PMF */
__BSS_END__ = .;
} >RAM
/*
* The xlat_table section is for full, aligned page tables (4K).
* Removing them from .bss avoids forcing 4K alignment on
* the .bss section. The tables are initialized to zero by the translation
* tables library.
*/
xlat_table (NOLOAD) : {
*(xlat_table)
} >RAM
__BSS_SIZE__ = SIZEOF(.bss);
#if USE_COHERENT_MEM
/*
* The base address of the coherent memory section must be page-aligned (4K)
* to guarantee that the coherent data are stored on their own pages and
* are not mixed with normal data. This is required to set up the correct
* memory attributes for the coherent data page tables.
*/
coherent_ram (NOLOAD) : ALIGN(PAGE_SIZE) {
__COHERENT_RAM_START__ = .;
/*
* Bakery locks are stored in coherent memory
*
* Each lock's data is contiguous and fully allocated by the compiler
*/
*(bakery_lock)
*(tzfw_coherent_mem)
__COHERENT_RAM_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked
* as device memory. No other unexpected data must creep in.
* Ensure the rest of the current memory page is unused.
*/
. = NEXT(PAGE_SIZE);
__COHERENT_RAM_END__ = .;
} >RAM
__COHERENT_RAM_UNALIGNED_SIZE__ =
__COHERENT_RAM_END_UNALIGNED__ - __COHERENT_RAM_START__;
#endif
/*
* Define a linker symbol to mark end of the RW memory area for this
* image.
*/
__RW_END__ = .;
__BL32_END__ = .;
}
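The bakery-lock region above holds one compiler-allocated copy of every lock (for CPU 0) plus PLATFORM_CORE_COUNT - 1 linker-reserved copies, one per-CPU region apart, so a given CPU's copy of a lock is found by plain pointer arithmetic. A self-contained sketch of that calculation (the symbol values below are made up; the firmware reads the real __BAKERY_LOCK_START__ and __PERCPU_BAKERY_LOCK_SIZE__ linker symbols):

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the linker-provided symbols; values are made up. */
static const uintptr_t bakery_lock_start = 0x04021000u; /* __BAKERY_LOCK_START__ */
static const uintptr_t percpu_lock_size = 0x40u; /* __PERCPU_BAKERY_LOCK_SIZE__ */

/* Address of CPU `cpu_idx`'s copy of a lock whose CPU 0 instance the
 * compiler placed at `lock`. */
static uintptr_t bakery_lock_for_cpu(uintptr_t lock, unsigned int cpu_idx)
{
        return lock + cpu_idx * percpu_lock_size;
}

int main(void)
{
        uintptr_t lock0 = bakery_lock_start + 0x8u; /* a lock in the CPU 0 region */
        unsigned int cpu;

        for (cpu = 0u; cpu < 4u; cpu++)
                printf("cpu%u copy at 0x%lx\n", cpu,
                       (unsigned long)bakery_lock_for_cpu(lock0, cpu));
        return 0;
}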

View File

@@ -0,0 +1,56 @@
#
# Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
#
ifneq (${ARCH}, aarch32)
$(error SP_MIN is only supported on AArch32 platforms)
endif
include lib/psci/psci_lib.mk
INCLUDES += -Iinclude/bl32/sp_min
BL32_SOURCES += bl32/sp_min/sp_min_main.c \
bl32/sp_min/aarch32/entrypoint.S \
common/runtime_svc.c \
plat/common/aarch32/plat_sp_min_common.c \
services/std_svc/std_svc_setup.c \
${PSCI_LIB_SOURCES}
ifeq (${ENABLE_PMF}, 1)
BL32_SOURCES += lib/pmf/pmf_main.c
endif
ifeq (${ENABLE_AMU}, 1)
BL32_SOURCES += lib/extensions/amu/aarch32/amu.c \
lib/extensions/amu/aarch32/amu_helpers.S
endif
ifeq (${WORKAROUND_CVE_2017_5715},1)
BL32_SOURCES += bl32/sp_min/wa_cve_2017_5715_bpiall.S \
bl32/sp_min/wa_cve_2017_5715_icache_inv.S
endif
BL32_LINKERFILE := bl32/sp_min/sp_min.ld.S
# Include the platform-specific SP_MIN Makefile
# If no platform-specific SP_MIN Makefile exists, it means SP_MIN is not supported
# on this platform.
SP_MIN_PLAT_MAKEFILE := $(wildcard ${PLAT_DIR}/sp_min/sp_min-${PLAT}.mk)
ifeq (,${SP_MIN_PLAT_MAKEFILE})
$(error SP_MIN is not supported on platform ${PLAT})
else
include ${SP_MIN_PLAT_MAKEFILE}
endif
RESET_TO_SP_MIN := 0
$(eval $(call add_define,RESET_TO_SP_MIN))
$(eval $(call assert_boolean,RESET_TO_SP_MIN))
# Flag to allow SP_MIN to handle FIQ interrupts in monitor mode. The platform
# port is free to override this value. It is disabled by default.
SP_MIN_WITH_SECURE_FIQ ?= 0
$(eval $(call add_define,SP_MIN_WITH_SECURE_FIQ))
$(eval $(call assert_boolean,SP_MIN_WITH_SECURE_FIQ))

View File

@@ -0,0 +1,237 @@
/*
* Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <arch_helpers.h>
#include <assert.h>
#include <bl_common.h>
#include <console.h>
#include <context.h>
#include <context_mgmt.h>
#include <debug.h>
#include <platform.h>
#include <platform_def.h>
#include <platform_sp_min.h>
#include <psci.h>
#include <runtime_svc.h>
#include <smccc_helpers.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <types.h>
#include <utils.h>
#include "sp_min_private.h"
/* Pointers to per-core cpu contexts */
static void *sp_min_cpu_ctx_ptr[PLATFORM_CORE_COUNT];
/* SP_MIN only stores the non secure smc context */
static smc_ctx_t sp_min_smc_context[PLATFORM_CORE_COUNT];
/******************************************************************************
* Define the smccc helper library APIs
*****************************************************************************/
void *smc_get_ctx(unsigned int security_state)
{
assert(security_state == NON_SECURE);
return &sp_min_smc_context[plat_my_core_pos()];
}
void smc_set_next_ctx(unsigned int security_state)
{
assert(security_state == NON_SECURE);
/* SP_MIN stores only non secure smc context. Nothing to do here */
}
void *smc_get_next_ctx(void)
{
return &sp_min_smc_context[plat_my_core_pos()];
}
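Since SP_MIN keeps a single non-secure SMC context per core, `smc_get_ctx(NON_SECURE)` and `smc_get_next_ctx()` hand back the same per-core slot. A hypothetical runtime-service handler produces its return values by writing into that slot before `sp_min_exit` restores it; the sketch below illustrates the pattern with a cut-down context type and a stubbed accessor, both assumptions of the sketch:

#include <stdint.h>

/* Cut-down smc_ctx_t stand-in; the real structure also carries the
 * banked mode registers plus lr_mon, spsr_mon and scr. */
typedef struct {
        uint32_t r0;
        uint32_t r1;
} smc_ctx_t;

/* Stub with the same shape as SP_MIN's helper: one context slot
 * (per core in the real code), always the non-secure one. */
static smc_ctx_t ctx_slot;

static void *smc_get_next_ctx(void)
{
        return &ctx_slot;
}

/* Hypothetical runtime-service handler: return values are produced by
 * writing into the SMC context that sp_min_exit later restores. */
static void *example_svc_handler(uint32_t smc_fid)
{
        smc_ctx_t *ctx = smc_get_next_ctx();

        ctx->r0 = 0u;      /* status: success */
        ctx->r1 = smc_fid; /* echo the function id back */
        return ctx;        /* the handler hands the context back */
}

int main(void)
{
        smc_ctx_t *ctx = example_svc_handler(0x84000000u);

        return (int)ctx->r0; /* 0 */
}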
/*******************************************************************************
* This function returns a pointer to the most recent 'cpu_context' structure
* for the calling CPU that was set as the context for the specified security
* state. NULL is returned if no such structure has been specified.
******************************************************************************/
void *cm_get_context(uint32_t security_state)
{
assert(security_state == NON_SECURE);
return sp_min_cpu_ctx_ptr[plat_my_core_pos()];
}
/*******************************************************************************
* This function sets the pointer to the current 'cpu_context' structure for the
* specified security state for the calling CPU
******************************************************************************/
void cm_set_context(void *context, uint32_t security_state)
{
assert(security_state == NON_SECURE);
sp_min_cpu_ctx_ptr[plat_my_core_pos()] = context;
}
/*******************************************************************************
* This function returns a pointer to the most recent 'cpu_context' structure
* for the CPU identified by `cpu_idx` that was set as the context for the
* specified security state. NULL is returned if no such structure has been
* specified.
******************************************************************************/
void *cm_get_context_by_index(unsigned int cpu_idx,
unsigned int security_state)
{
assert(security_state == NON_SECURE);
return sp_min_cpu_ctx_ptr[cpu_idx];
}
/*******************************************************************************
* This function sets the pointer to the current 'cpu_context' structure for the
* specified security state for the CPU identified by CPU index.
******************************************************************************/
void cm_set_context_by_index(unsigned int cpu_idx, void *context,
unsigned int security_state)
{
assert(security_state == NON_SECURE);
sp_min_cpu_ctx_ptr[cpu_idx] = context;
}
static void copy_cpu_ctx_to_smc_stx(const regs_t *cpu_reg_ctx,
smc_ctx_t *next_smc_ctx)
{
next_smc_ctx->r0 = read_ctx_reg(cpu_reg_ctx, CTX_GPREG_R0);
next_smc_ctx->lr_mon = read_ctx_reg(cpu_reg_ctx, CTX_LR);
next_smc_ctx->spsr_mon = read_ctx_reg(cpu_reg_ctx, CTX_SPSR);
next_smc_ctx->scr = read_ctx_reg(cpu_reg_ctx, CTX_SCR);
}
/*******************************************************************************
* This function invokes the PSCI library interface to initialize the
* non secure cpu context and copies the relevant cpu context register values
* to smc context. These registers will get programmed during `smc_exit`.
******************************************************************************/
static void sp_min_prepare_next_image_entry(void)
{
entry_point_info_t *next_image_info;
cpu_context_t *ctx = cm_get_context(NON_SECURE);
u_register_t ns_sctlr;
/* Program system registers to proceed to non-secure */
next_image_info = sp_min_plat_get_bl33_ep_info();
assert(next_image_info);
assert(NON_SECURE == GET_SECURITY_STATE(next_image_info->h.attr));
INFO("SP_MIN: Preparing exit to normal world\n");
psci_prepare_next_non_secure_ctx(next_image_info);
smc_set_next_ctx(NON_SECURE);
/* Copy r0, lr and spsr from cpu context to SMC context */
copy_cpu_ctx_to_smc_stx(get_regs_ctx(cm_get_context(NON_SECURE)),
smc_get_next_ctx());
/* Temporarily set the NS bit to access NS SCTLR */
write_scr(read_scr() | SCR_NS_BIT);
isb();
ns_sctlr = read_ctx_reg(get_regs_ctx(ctx), CTX_NS_SCTLR);
write_sctlr(ns_sctlr);
isb();
write_scr(read_scr() & ~SCR_NS_BIT);
isb();
}
/******************************************************************************
* Implement the ARM Standard Service function to get arguments for a
* particular service.
*****************************************************************************/
uintptr_t get_arm_std_svc_args(unsigned int svc_mask)
{
/* Setup the arguments for PSCI Library */
DEFINE_STATIC_PSCI_LIB_ARGS_V1(psci_args, sp_min_warm_entrypoint);
/* PSCI is the only ARM Standard Service implemented */
assert(svc_mask == PSCI_FID_MASK);
return (uintptr_t)&psci_args;
}
/******************************************************************************
* The SP_MIN main function. Do the platform and PSCI Library setup. Also
* initialize the runtime service framework.
*****************************************************************************/
void sp_min_main(void)
{
NOTICE("SP_MIN: %s\n", version_string);
NOTICE("SP_MIN: %s\n", build_message);
/* Perform the SP_MIN platform setup */
sp_min_platform_setup();
/* Initialize the runtime services e.g. psci */
INFO("SP_MIN: Initializing runtime services\n");
runtime_svc_init();
/*
* We are ready to enter the next EL. Prepare entry into the image
* corresponding to the desired security state after the next ERET.
*/
sp_min_prepare_next_image_entry();
/*
* Perform any platform specific runtime setup prior to cold boot exit
* from SP_MIN.
*/
sp_min_plat_runtime_setup();
console_flush();
}
/******************************************************************************
* This function is invoked during warm boot. Invoke the PSCI library
* warm boot entry point, which takes care of architectural and platform setup/
* restore. Copy the relevant cpu_context register values to smc context which
* will get programmed during `smc_exit`.
*****************************************************************************/
void sp_min_warm_boot(void)
{
smc_ctx_t *next_smc_ctx;
cpu_context_t *ctx = cm_get_context(NON_SECURE);
u_register_t ns_sctlr;
psci_warmboot_entrypoint();
smc_set_next_ctx(NON_SECURE);
next_smc_ctx = smc_get_next_ctx();
zeromem(next_smc_ctx, sizeof(smc_ctx_t));
copy_cpu_ctx_to_smc_stx(get_regs_ctx(cm_get_context(NON_SECURE)),
next_smc_ctx);
/* Temporarily set the NS bit to access NS SCTLR */
write_scr(read_scr() | SCR_NS_BIT);
isb();
ns_sctlr = read_ctx_reg(get_regs_ctx(ctx), CTX_NS_SCTLR);
write_sctlr(ns_sctlr);
isb();
write_scr(read_scr() & ~SCR_NS_BIT);
isb();
}
#if SP_MIN_WITH_SECURE_FIQ
/******************************************************************************
* This function is invoked on secure interrupts. By construction of
* SP_MIN, secure interrupts can only be handled when the core executes
* in the non-secure state.
*****************************************************************************/
void sp_min_fiq(void)
{
uint32_t id;
id = plat_ic_acknowledge_interrupt();
sp_min_plat_fiq_handler(id);
plat_ic_end_of_interrupt(id);
}
#endif /* SP_MIN_WITH_SECURE_FIQ */
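`sp_min_plat_fiq_handler` is the platform hook sandwiched between the acknowledge and end-of-interrupt calls above, so it only needs to service the device itself. A hypothetical platform implementation might look like the following; the interrupt number and the handler body are made up for illustration:

#include <stdint.h>
#include <stdio.h>

#define SEC_TIMER_IRQ 29u /* made-up secure interrupt id */

/* Illustrative platform hook matching the call site in sp_min_fiq():
 * the generic code has already acknowledged the interrupt and signals
 * end-of-interrupt after this returns, so only the device is serviced
 * here. */
void sp_min_plat_fiq_handler(uint32_t id)
{
        if (id == SEC_TIMER_IRQ) {
                /* e.g. clear and rearm the secure timer */
                printf("secure timer tick\n");
        } else {
                printf("unexpected secure interrupt %u\n", (unsigned int)id);
        }
}

int main(void)
{
        sp_min_plat_fiq_handler(SEC_TIMER_IRQ);
        return 0;
}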

View File

@@ -0,0 +1,15 @@
/*
* Copyright (c) 2016, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef __SP_MIN_H__
#define __SP_MIN_H__
void sp_min_warm_entrypoint(void);
void sp_min_main(void);
void sp_min_warm_boot(void);
void sp_min_fiq(void);
#endif /* __SP_MIN_H__ */

View File

@@ -0,0 +1,74 @@
/*
* Copyright (c) 2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <asm_macros.S>
.globl wa_cve_2017_5715_bpiall_vbar
vector_base wa_cve_2017_5715_bpiall_vbar
/* We encode the exception entry in the bottom 3 bits of SP */
add sp, sp, #1 /* Reset: 0b111 */
add sp, sp, #1 /* Undef: 0b110 */
add sp, sp, #1 /* Syscall: 0b101 */
add sp, sp, #1 /* Prefetch abort: 0b100 */
add sp, sp, #1 /* Data abort: 0b011 */
add sp, sp, #1 /* Reserved: 0b010 */
add sp, sp, #1 /* IRQ: 0b001 */
nop /* FIQ: 0b000 */
/*
* Invalidate the branch predictor, `r0` is a dummy register
* and is unused.
*/
stcopr r0, BPIALL
isb
/*
* As we cannot use any temporary registers and cannot
* clobber SP, we can decode the exception entry using
* an unrolled binary search.
*
* Note, if this code is re-used by other secure payloads,
* the below exception entry vectors must be changed to
* the vectors specific to that secure payload.
*/
tst sp, #4
bne 1f
tst sp, #2
bne 3f
/* Expected encoding: 0x1 and 0x0 */
tst sp, #1
/* Restore original value of SP by clearing the bottom 3 bits */
bic sp, sp, #0x7
bne plat_panic_handler /* IRQ */
b sp_min_handle_fiq /* FIQ */
1:
tst sp, #2
bne 2f
/* Expected encoding: 0x4 and 0x5 */
tst sp, #1
bic sp, sp, #0x7
bne sp_min_handle_smc /* Syscall */
b plat_panic_handler /* Prefetch abort */
2:
/* Expected encoding: 0x7 and 0x6 */
tst sp, #1
bic sp, sp, #0x7
bne sp_min_entrypoint /* Reset */
b plat_panic_handler /* Undef */
3:
/* Expected encoding: 0x2 and 0x3 */
tst sp, #1
bic sp, sp, #0x7
bne plat_panic_handler /* Data abort */
b plat_panic_handler /* Reserved */
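Because the stub may not clobber any general-purpose register before the branch-predictor invalidation, it records which vector fired by adding 1 to SP once per vector slot, leaving the entry number in SP's bottom three bits; the `tst`/`bne` chain above is then an unrolled binary search over that value, needing at most three tests instead of up to seven linear compares. The same 3-bit decode written as plain C, restating the mapping from the comments above:

#include <stdint.h>
#include <stdio.h>

/* Bottom-3-bits-of-SP encoding used by the workaround vectors. */
static const char *decode_entry(uintptr_t sp)
{
        static const char *const names[8] = {
                "FIQ",            /* 0b000 */
                "IRQ",            /* 0b001 */
                "Reserved",       /* 0b010 */
                "Data abort",     /* 0b011 */
                "Prefetch abort", /* 0b100 */
                "Syscall (SMC)",  /* 0b101 */
                "Undef",          /* 0b110 */
                "Reset",          /* 0b111 */
        };

        return names[sp & 0x7u];
}

int main(void)
{
        uintptr_t e;

        for (e = 0u; e < 8u; e++)
                printf("%u -> %s\n", (unsigned int)e, decode_entry(e));
        return 0;
}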

View File

@@ -0,0 +1,75 @@
/*
* Copyright (c) 2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <asm_macros.S>
.globl wa_cve_2017_5715_icache_inv_vbar
vector_base wa_cve_2017_5715_icache_inv_vbar
/* We encode the exception entry in the bottom 3 bits of SP */
add sp, sp, #1 /* Reset: 0b111 */
add sp, sp, #1 /* Undef: 0b110 */
add sp, sp, #1 /* Syscall: 0b101 */
add sp, sp, #1 /* Prefetch abort: 0b100 */
add sp, sp, #1 /* Data abort: 0b011 */
add sp, sp, #1 /* Reserved: 0b010 */
add sp, sp, #1 /* IRQ: 0b001 */
nop /* FIQ: 0b000 */
/*
* Invalidate the instruction cache, which we assume also
* invalidates the branch predictor. This may depend on
* other CPU specific changes (e.g. an ACTLR setting).
*/
stcopr r0, ICIALLU
isb
/*
* As we cannot use any temporary registers and cannot
* clobber SP, we can decode the exception entry using
* an unrolled binary search.
*
* Note, if this code is re-used by other secure payloads,
* the below exception entry vectors must be changed to
* the vectors specific to that secure payload.
*/
tst sp, #4
bne 1f
tst sp, #2
bne 3f
/* Expected encoding: 0x1 and 0x0 */
tst sp, #1
/* Restore original value of SP by clearing the bottom 3 bits */
bic sp, sp, #0x7
bne plat_panic_handler /* IRQ */
b sp_min_handle_fiq /* FIQ */
1:
/* Expected encoding: 0x4 and 0x5 */
tst sp, #2
bne 2f
tst sp, #1
bic sp, sp, #0x7
bne sp_min_handle_smc /* Syscall */
b plat_panic_handler /* Prefetch abort */
2:
/* Expected encoding: 0x7 and 0x6 */
tst sp, #1
bic sp, sp, #0x7
bne sp_min_entrypoint /* Reset */
b plat_panic_handler /* Undef */
3:
/* Expected encoding: 0x2 and 0x3 */
tst sp, #1
bic sp, sp, #0x7
bne plat_panic_handler /* Data abort */
b plat_panic_handler /* Reserved */

View File

@@ -0,0 +1,453 @@
/*
* Copyright (c) 2013-2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <tsp.h>
#include <xlat_tables_defs.h>
#include "../tsp_private.h"
.globl tsp_entrypoint
.globl tsp_vector_table
/* ---------------------------------------------
* Populate the params in x0-x7 from the pointer
* to the smc args structure in x0.
* ---------------------------------------------
*/
.macro restore_args_call_smc
ldp x6, x7, [x0, #TSP_ARG6]
ldp x4, x5, [x0, #TSP_ARG4]
ldp x2, x3, [x0, #TSP_ARG2]
ldp x0, x1, [x0, #TSP_ARG0]
smc #0
.endm
.macro save_eret_context reg1 reg2
mrs \reg1, elr_el1
mrs \reg2, spsr_el1
stp \reg1, \reg2, [sp, #-0x10]!
stp x30, x18, [sp, #-0x10]!
.endm
.macro restore_eret_context reg1 reg2
ldp x30, x18, [sp], #0x10
ldp \reg1, \reg2, [sp], #0x10
msr elr_el1, \reg1
msr spsr_el1, \reg2
.endm
func tsp_entrypoint _align=3
/* ---------------------------------------------
* Set the exception vector to something sane.
* ---------------------------------------------
*/
adr x0, tsp_exceptions
msr vbar_el1, x0
isb
/* ---------------------------------------------
* Enable the SError interrupt now that the
* exception vectors have been setup.
* ---------------------------------------------
*/
msr daifclr, #DAIF_ABT_BIT
/* ---------------------------------------------
* Enable the instruction cache, stack pointer
* and data access alignment checks
* ---------------------------------------------
*/
mov x1, #(SCTLR_I_BIT | SCTLR_A_BIT | SCTLR_SA_BIT)
mrs x0, sctlr_el1
orr x0, x0, x1
msr sctlr_el1, x0
isb
/* ---------------------------------------------
* Invalidate the RW memory used by the BL32
* image. This includes the data and NOBITS
* sections. This is done to safeguard against
* possible corruption of this memory by dirty
* cache lines in a system cache as a result of
* use by an earlier boot loader stage.
* ---------------------------------------------
*/
adr x0, __RW_START__
adr x1, __RW_END__
sub x1, x1, x0
bl inv_dcache_range
/* ---------------------------------------------
* Zero out NOBITS sections. There are 2 of them:
* - the .bss section;
* - the coherent memory section.
* ---------------------------------------------
*/
ldr x0, =__BSS_START__
ldr x1, =__BSS_SIZE__
bl zeromem
#if USE_COHERENT_MEM
ldr x0, =__COHERENT_RAM_START__
ldr x1, =__COHERENT_RAM_UNALIGNED_SIZE__
bl zeromem
#endif
/* --------------------------------------------
* Allocate a stack whose memory will be marked
* as Normal-IS-WBWA when the MMU is enabled.
* There is no risk of reading stale stack
* memory after enabling the MMU as only the
* primary cpu is running at the moment.
* --------------------------------------------
*/
bl plat_set_my_stack
/* ---------------------------------------------
* Initialize the stack protector canary before
* any C code is called.
* ---------------------------------------------
*/
#if STACK_PROTECTOR_ENABLED
bl update_stack_protector_canary
#endif
/* ---------------------------------------------
* Perform early platform setup & platform
* specific early arch. setup e.g. mmu setup
* ---------------------------------------------
*/
bl tsp_early_platform_setup
bl tsp_plat_arch_setup
/* ---------------------------------------------
* Jump to main function.
* ---------------------------------------------
*/
bl tsp_main
/* ---------------------------------------------
* Tell TSPD that we are done initialising
* ---------------------------------------------
*/
mov x1, x0
mov x0, #TSP_ENTRY_DONE
smc #0
tsp_entrypoint_panic:
b tsp_entrypoint_panic
endfunc tsp_entrypoint
/* -------------------------------------------
* Table of entrypoint vectors provided to the
* TSPD for the various entrypoints
* -------------------------------------------
*/
func tsp_vector_table
b tsp_yield_smc_entry
b tsp_fast_smc_entry
b tsp_cpu_on_entry
b tsp_cpu_off_entry
b tsp_cpu_resume_entry
b tsp_cpu_suspend_entry
b tsp_sel1_intr_entry
b tsp_system_off_entry
b tsp_system_reset_entry
b tsp_abort_yield_smc_entry
endfunc tsp_vector_table
/*---------------------------------------------
* This entrypoint is used by the TSPD when this
* cpu is to be turned off through a CPU_OFF
* psci call to ask the TSP to perform any
* bookkeeping necessary. In the current
* implementation, the TSPD expects the TSP to
* re-initialise its state so nothing is done
* here except for acknowledging the request.
* ---------------------------------------------
*/
func tsp_cpu_off_entry
bl tsp_cpu_off_main
restore_args_call_smc
endfunc tsp_cpu_off_entry
/*---------------------------------------------
* This entrypoint is used by the TSPD when the
* system is about to be switched off (through
* a SYSTEM_OFF psci call) to ask the TSP to
* perform any necessary bookkeeping.
* ---------------------------------------------
*/
func tsp_system_off_entry
bl tsp_system_off_main
restore_args_call_smc
endfunc tsp_system_off_entry
/*---------------------------------------------
* This entrypoint is used by the TSPD when the
* system is about to be reset (through a
* SYSTEM_RESET psci call) to ask the TSP to
* perform any necessary bookkeeping.
* ---------------------------------------------
*/
func tsp_system_reset_entry
bl tsp_system_reset_main
restore_args_call_smc
endfunc tsp_system_reset_entry
/*---------------------------------------------
* This entrypoint is used by the TSPD when this
* cpu is turned on using a CPU_ON psci call to
* ask the TSP to initialise itself i.e. set up
* the mmu, stacks etc. Minimal architectural
* state will be initialised by the TSPD when
* this function is entered i.e. Caches and MMU
* will be turned off, the execution state
* will be aarch64 and exceptions masked.
* ---------------------------------------------
*/
func tsp_cpu_on_entry
/* ---------------------------------------------
* Set the exception vector to something sane.
* ---------------------------------------------
*/
adr x0, tsp_exceptions
msr vbar_el1, x0
isb
/* Enable the SError interrupt */
msr daifclr, #DAIF_ABT_BIT
/* ---------------------------------------------
* Enable the instruction cache, stack pointer
* and data access alignment checks
* ---------------------------------------------
*/
mov x1, #(SCTLR_I_BIT | SCTLR_A_BIT | SCTLR_SA_BIT)
mrs x0, sctlr_el1
orr x0, x0, x1
msr sctlr_el1, x0
isb
/* --------------------------------------------
* Give ourselves a stack whose memory will be
* marked as Normal-IS-WBWA when the MMU is
* enabled.
* --------------------------------------------
*/
bl plat_set_my_stack
/* --------------------------------------------
* Enable the MMU with the DCache disabled. It
* is safe to use stacks allocated in normal
* memory as a result. All memory accesses are
* marked nGnRnE when the MMU is disabled. So
* all the stack writes will make it to memory.
* All memory accesses are marked Non-cacheable
* when the MMU is enabled but D$ is disabled.
* So used stack memory is guaranteed to be
* visible immediately after the MMU is enabled
* Enabling the DCache at the same time as the
* MMU can lead to speculatively fetched and
* possibly stale stack memory being read from
* other caches. This can lead to coherency
* issues.
* --------------------------------------------
*/
mov x0, #DISABLE_DCACHE
bl bl32_plat_enable_mmu
/* ---------------------------------------------
* Enable the Data cache now that the MMU has
* been enabled. The stack has been unwound. It
* will be written first before being read. This
* will invalidate any stale cache lines
* resident in other caches. We assume that
* interconnect coherency has been enabled for
* this cluster by EL3 firmware.
* ---------------------------------------------
*/
mrs x0, sctlr_el1
orr x0, x0, #SCTLR_C_BIT
msr sctlr_el1, x0
isb
/* ---------------------------------------------
* Enter C runtime to perform any remaining
* bookkeeping
* ---------------------------------------------
*/
bl tsp_cpu_on_main
restore_args_call_smc
/* Should never reach here */
tsp_cpu_on_entry_panic:
b tsp_cpu_on_entry_panic
endfunc tsp_cpu_on_entry
/*---------------------------------------------
* This entrypoint is used by the TSPD when this
* cpu is to be suspended through a CPU_SUSPEND
* psci call to ask the TSP to perform any
* bookkeeping necessary. In the current
* implementation, the TSPD saves and restores
* the EL1 state.
* ---------------------------------------------
*/
func tsp_cpu_suspend_entry
bl tsp_cpu_suspend_main
restore_args_call_smc
endfunc tsp_cpu_suspend_entry
/*-------------------------------------------------
* This entrypoint is used by the TSPD to pass
* control for `synchronously` handling an S-EL1
* interrupt which was triggered while executing
* in normal world. 'x0' contains a magic number
* which indicates this. TSPD expects control to
* be handed back at the end of interrupt
* processing. This is done through an SMC.
* The handover agreement is:
*
* 1. PSTATE.DAIF are set upon entry. 'x1' has
* the ELR_EL3 from the non-secure state.
* 2. TSP has to preserve the callee saved
* general purpose registers, SP_EL1/EL0 and
* LR.
* 3. TSP has to preserve the system and vfp
* registers (if applicable).
* 4. TSP can use 'x0-x18' to enable its C
* runtime.
* 5. TSP returns to TSPD using an SMC with
* 'x0' = TSP_HANDLED_S_EL1_INTR
* ------------------------------------------------
*/
func tsp_sel1_intr_entry
#if DEBUG
mov_imm x2, TSP_HANDLE_SEL1_INTR_AND_RETURN
cmp x0, x2
b.ne tsp_sel1_int_entry_panic
#endif
/*-------------------------------------------------
* Save any previous context needed to perform
* an exception return from S-EL1 e.g. context
* from a previous Non secure Interrupt.
* Update statistics and handle the S-EL1
* interrupt before returning to the TSPD.
* IRQ/FIQs are not enabled since that will
* complicate the implementation. Execution
* will be transferred back to the normal world
* in any case. The handler can return 0
* if the interrupt was handled or TSP_PREEMPTED
* if the expected interrupt was preempted
* by an interrupt that should be handled in EL3
* e.g. Group 0 interrupt in GICv3. In both
* the cases switch to EL3 using SMC with id
* TSP_HANDLED_S_EL1_INTR. Any other return value
* from the handler will result in panic.
* ------------------------------------------------
*/
save_eret_context x2 x3
bl tsp_update_sync_sel1_intr_stats
bl tsp_common_int_handler
/* Check if the S-EL1 interrupt has been handled */
cbnz x0, tsp_sel1_intr_check_preemption
b tsp_sel1_intr_return
tsp_sel1_intr_check_preemption:
/* Check if the S-EL1 interrupt has been preempted */
mov_imm x1, TSP_PREEMPTED
cmp x0, x1
b.ne tsp_sel1_int_entry_panic
tsp_sel1_intr_return:
mov_imm x0, TSP_HANDLED_S_EL1_INTR
restore_eret_context x2 x3
smc #0
/* Should never reach here */
tsp_sel1_int_entry_panic:
no_ret plat_panic_handler
endfunc tsp_sel1_intr_entry
/*---------------------------------------------
* This entrypoint is used by the TSPD when this
* cpu resumes execution after an earlier
* CPU_SUSPEND psci call to ask the TSP to
* restore its saved context. In the current
* implementation, the TSPD saves and restores
* EL1 state so nothing is done here apart from
* acknowledging the request.
* ---------------------------------------------
*/
func tsp_cpu_resume_entry
bl tsp_cpu_resume_main
restore_args_call_smc
/* Should never reach here */
no_ret plat_panic_handler
endfunc tsp_cpu_resume_entry
/*---------------------------------------------
* This entrypoint is used by the TSPD to ask
* the TSP to service a fast smc request.
* ---------------------------------------------
*/
func tsp_fast_smc_entry
bl tsp_smc_handler
restore_args_call_smc
/* Should never reach here */
no_ret plat_panic_handler
endfunc tsp_fast_smc_entry
/*---------------------------------------------
* This entrypoint is used by the TSPD to ask
* the TSP to service a Yielding SMC request.
* We will enable preemption during execution
* of tsp_smc_handler.
* ---------------------------------------------
*/
func tsp_yield_smc_entry
msr daifclr, #DAIF_FIQ_BIT | DAIF_IRQ_BIT
bl tsp_smc_handler
msr daifset, #DAIF_FIQ_BIT | DAIF_IRQ_BIT
restore_args_call_smc
/* Should never reach here */
no_ret plat_panic_handler
endfunc tsp_yield_smc_entry
/*---------------------------------------------------------------------
* This entrypoint is used by the TSPD to abort a pre-empted Yielding
* SMC. It could be on behalf of non-secure world or because a CPU
* suspend/CPU off request needs to abort the preempted SMC.
* --------------------------------------------------------------------
*/
func tsp_abort_yield_smc_entry
/*
* Exceptions masking is already done by the TSPD when entering this
* hook so there is no need to do it here.
*/
/* Reset the stack used by the pre-empted SMC */
bl plat_set_my_stack
/*
* Allow some cleanup such as releasing locks.
*/
bl tsp_abort_smc_handler
restore_args_call_smc
/* Should never reach here */
bl plat_panic_handler
endfunc tsp_abort_yield_smc_entry

View File

@@ -0,0 +1,163 @@
/*
* Copyright (c) 2013-2016, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl_common.h>
#include <tsp.h>
/* ----------------------------------------------------
* The caller-saved registers x0-x18 and LR are saved
* here.
* ----------------------------------------------------
*/
#define SCRATCH_REG_SIZE #(20 * 8)
.macro save_caller_regs_and_lr
sub sp, sp, SCRATCH_REG_SIZE
stp x0, x1, [sp]
stp x2, x3, [sp, #0x10]
stp x4, x5, [sp, #0x20]
stp x6, x7, [sp, #0x30]
stp x8, x9, [sp, #0x40]
stp x10, x11, [sp, #0x50]
stp x12, x13, [sp, #0x60]
stp x14, x15, [sp, #0x70]
stp x16, x17, [sp, #0x80]
stp x18, x30, [sp, #0x90]
.endm
.macro restore_caller_regs_and_lr
ldp x0, x1, [sp]
ldp x2, x3, [sp, #0x10]
ldp x4, x5, [sp, #0x20]
ldp x6, x7, [sp, #0x30]
ldp x8, x9, [sp, #0x40]
ldp x10, x11, [sp, #0x50]
ldp x12, x13, [sp, #0x60]
ldp x14, x15, [sp, #0x70]
ldp x16, x17, [sp, #0x80]
ldp x18, x30, [sp, #0x90]
add sp, sp, SCRATCH_REG_SIZE
.endm
/* ----------------------------------------------------
* Common TSP interrupt handling routine
* ----------------------------------------------------
*/
.macro handle_tsp_interrupt label
/* Enable the SError interrupt */
msr daifclr, #DAIF_ABT_BIT
save_caller_regs_and_lr
bl tsp_common_int_handler
cbz x0, interrupt_exit_\label
/*
* This interrupt was not targeted at S-EL1, so send it to
* the monitor and wait for execution to resume.
*/
smc #0
interrupt_exit_\label:
restore_caller_regs_and_lr
eret
.endm
.globl tsp_exceptions
/* -----------------------------------------------------
* TSP exception handlers.
* -----------------------------------------------------
*/
vector_base tsp_exceptions
/* -----------------------------------------------------
* Current EL with _sp_el0 : 0x0 - 0x200. No exceptions
* are expected and treated as irrecoverable errors.
* -----------------------------------------------------
*/
vector_entry sync_exception_sp_el0
b plat_panic_handler
check_vector_size sync_exception_sp_el0
vector_entry irq_sp_el0
b plat_panic_handler
check_vector_size irq_sp_el0
vector_entry fiq_sp_el0
b plat_panic_handler
check_vector_size fiq_sp_el0
vector_entry serror_sp_el0
b plat_panic_handler
check_vector_size serror_sp_el0
/* -----------------------------------------------------
* Current EL with SPx: 0x200 - 0x400. Only IRQs/FIQs
* are expected and handled
* -----------------------------------------------------
*/
vector_entry sync_exception_sp_elx
b plat_panic_handler
check_vector_size sync_exception_sp_elx
vector_entry irq_sp_elx
handle_tsp_interrupt irq_sp_elx
check_vector_size irq_sp_elx
vector_entry fiq_sp_elx
handle_tsp_interrupt fiq_sp_elx
check_vector_size fiq_sp_elx
vector_entry serror_sp_elx
b plat_panic_handler
check_vector_size serror_sp_elx
/* -----------------------------------------------------
* Lower EL using AArch64 : 0x400 - 0x600. No exceptions
* are handled since TSP does not implement a lower EL
* -----------------------------------------------------
*/
vector_entry sync_exception_aarch64
b plat_panic_handler
check_vector_size sync_exception_aarch64
vector_entry irq_aarch64
b plat_panic_handler
check_vector_size irq_aarch64
vector_entry fiq_aarch64
b plat_panic_handler
check_vector_size fiq_aarch64
vector_entry serror_aarch64
b plat_panic_handler
check_vector_size serror_aarch64
/* -----------------------------------------------------
* Lower EL using AArch32 : 0x600 - 0x800. No exceptions
* handled since the TSP does not implement a lower EL.
* -----------------------------------------------------
*/
vector_entry sync_exception_aarch32
b plat_panic_handler
check_vector_size sync_exception_aarch32
vector_entry irq_aarch32
b plat_panic_handler
check_vector_size irq_aarch32
vector_entry fiq_aarch32
b plat_panic_handler
check_vector_size fiq_aarch32
vector_entry serror_aarch32
b plat_panic_handler
check_vector_size serror_aarch32

View File

@@ -0,0 +1,39 @@
/*
* Copyright (c) 2013-2014, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <asm_macros.S>
#include <tsp.h>
.globl tsp_get_magic
/*
* This function raises an SMC to retrieve arguments from secure
* monitor/dispatcher, saves the returned arguments in the array received in x0,
* and then returns to the caller
*/
func tsp_get_magic
/* Save address to stack */
stp x0, xzr, [sp, #-16]!
/* Load arguments */
ldr w0, _tsp_fid_get_magic
/* Raise SMC */
smc #0
/* Restore address from stack */
ldp x4, xzr, [sp], #16
/* Store returned arguments to the array */
stp x0, x1, [x4, #0]
ret
endfunc tsp_get_magic
.align 2
_tsp_fid_get_magic:
.word TSP_GET_ARGS
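A usage sketch for `tsp_get_magic`: the caller passes a two-element array, and on return it holds the two arguments handed back by the dispatcher, exactly as `tsp_smc_handler` uses it further down. The assembly routine is stubbed here so the sketch is self-contained:

#include <stdint.h>
#include <stdio.h>

/* Stub standing in for the assembly routine above, which raises the
 * TSP_GET_ARGS SMC and stores the two values returned by the
 * dispatcher into the caller's array. */
static void tsp_get_magic(uint64_t args[2])
{
        args[0] = 11u; /* pretend values from the secure monitor */
        args[1] = 22u;
}

int main(void)
{
        uint64_t service_args[2];

        /* Mirrors the call in tsp_smc_handler(): fetch the extra SMC
         * arguments, then use them in the requested operation. */
        tsp_get_magic(service_args);
        printf("args: %llu %llu\n",
               (unsigned long long)service_args[0],
               (unsigned long long)service_args[1]);
        return 0;
}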

View File

@@ -0,0 +1,139 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <platform_def.h>
#include <xlat_tables_defs.h>
OUTPUT_FORMAT(PLATFORM_LINKER_FORMAT)
OUTPUT_ARCH(PLATFORM_LINKER_ARCH)
ENTRY(tsp_entrypoint)
MEMORY {
RAM (rwx): ORIGIN = TSP_SEC_MEM_BASE, LENGTH = TSP_SEC_MEM_SIZE
}
SECTIONS
{
. = BL32_BASE;
ASSERT(. == ALIGN(PAGE_SIZE),
"BL32_BASE address is not aligned on a page boundary.")
#if SEPARATE_CODE_AND_RODATA
.text . : {
__TEXT_START__ = .;
*tsp_entrypoint.o(.text*)
*(.text*)
*(.vectors)
. = NEXT(PAGE_SIZE);
__TEXT_END__ = .;
} >RAM
.rodata . : {
__RODATA_START__ = .;
*(.rodata*)
. = NEXT(PAGE_SIZE);
__RODATA_END__ = .;
} >RAM
#else
ro . : {
__RO_START__ = .;
*tsp_entrypoint.o(.text*)
*(.text*)
*(.rodata*)
*(.vectors)
__RO_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked as
* read-only, executable. No RW data from the next section must
* creep in. Ensure the rest of the current memory page is unused.
*/
. = NEXT(PAGE_SIZE);
__RO_END__ = .;
} >RAM
#endif
/*
* Define a linker symbol to mark start of the RW memory area for this
* image.
*/
__RW_START__ = . ;
.data . : {
__DATA_START__ = .;
*(.data*)
__DATA_END__ = .;
} >RAM
#ifdef TSP_PROGBITS_LIMIT
ASSERT(. <= TSP_PROGBITS_LIMIT, "TSP progbits has exceeded its limit.")
#endif
stacks (NOLOAD) : {
__STACKS_START__ = .;
*(tzfw_normal_stacks)
__STACKS_END__ = .;
} >RAM
/*
* The .bss section gets initialised to 0 at runtime.
* Its base address should be 16-byte aligned for better performance of the
* zero-initialization code.
*/
.bss : ALIGN(16) {
__BSS_START__ = .;
*(SORT_BY_ALIGNMENT(.bss*))
*(COMMON)
__BSS_END__ = .;
} >RAM
/*
* The xlat_table section is for full, aligned page tables (4K).
* Removing them from .bss avoids forcing 4K alignment on
* the .bss section. The tables are initialized to zero by the translation
* tables library.
*/
xlat_table (NOLOAD) : {
*(xlat_table)
} >RAM
#if USE_COHERENT_MEM
/*
* The base address of the coherent memory section must be page-aligned (4K)
* to guarantee that the coherent data are stored on their own pages and
* are not mixed with normal data. This is required to set up the correct
* memory attributes for the coherent data page tables.
*/
coherent_ram (NOLOAD) : ALIGN(PAGE_SIZE) {
__COHERENT_RAM_START__ = .;
*(tzfw_coherent_mem)
__COHERENT_RAM_END_UNALIGNED__ = .;
/*
* Memory page(s) mapped to this section will be marked
* as device memory. No other unexpected data must creep in.
* Ensure the rest of the current memory page is unused.
*/
. = NEXT(PAGE_SIZE);
__COHERENT_RAM_END__ = .;
} >RAM
#endif
/*
* Define a linker symbol to mark the end of the RW memory area for this
* image.
*/
__RW_END__ = .;
__BL32_END__ = .;
__BSS_SIZE__ = SIZEOF(.bss);
#if USE_COHERENT_MEM
__COHERENT_RAM_UNALIGNED_SIZE__ =
__COHERENT_RAM_END_UNALIGNED__ - __COHERENT_RAM_START__;
#endif
ASSERT(. <= BL32_LIMIT, "BL32 image has exceeded its limit.")
}

View File

@@ -0,0 +1,36 @@
#
# Copyright (c) 2013-2016, ARM Limited and Contributors. All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
#
INCLUDES += -Iinclude/bl32/tsp
BL32_SOURCES += bl32/tsp/tsp_main.c \
bl32/tsp/aarch64/tsp_entrypoint.S \
bl32/tsp/aarch64/tsp_exceptions.S \
bl32/tsp/aarch64/tsp_request.S \
bl32/tsp/tsp_interrupt.c \
bl32/tsp/tsp_timer.c \
common/aarch64/early_exceptions.S \
lib/locks/exclusive/aarch64/spinlock.S
BL32_LINKERFILE := bl32/tsp/tsp.ld.S
# This flag determines if the TSPD initializes BL32 in tspd_init() (synchronous
# method) or configures BL31 to pass control to BL32 instead of BL33
# (asynchronous method).
TSP_INIT_ASYNC := 0
$(eval $(call assert_boolean,TSP_INIT_ASYNC))
$(eval $(call add_define,TSP_INIT_ASYNC))
# Include the platform-specific TSP Makefile
# If no platform-specific TSP Makefile exists, it means TSP is not supported
# on this platform.
TSP_PLAT_MAKEFILE := $(wildcard ${PLAT_DIR}/tsp/tsp-${PLAT}.mk)
ifeq (,${TSP_PLAT_MAKEFILE})
$(error TSP is not supported on platform ${PLAT})
else
include ${TSP_PLAT_MAKEFILE}
endif

View File

@@ -0,0 +1,113 @@
/*
* Copyright (c) 2014-2015, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch_helpers.h>
#include <assert.h>
#include <debug.h>
#include <platform.h>
#include <platform_def.h>
#include <tsp.h>
#include "tsp_private.h"
/*******************************************************************************
* This function updates the TSP statistics for S-EL1 interrupts handled
* synchronously, i.e. the ones that have been handed over by the TSPD. It also
* keeps count of the number of times control was passed back to the TSPD
* after handling the interrupt. In the future it will be possible that the
* TSPD hands over an S-EL1 interrupt to the TSP but does not expect it to
* return execution. This statistic will be useful to distinguish between these
* two models of synchronous S-EL1 interrupt handling. The 'elr_el3' parameter
* contains the address of the instruction in normal world where this S-EL1
* interrupt was generated.
******************************************************************************/
void tsp_update_sync_sel1_intr_stats(uint32_t type, uint64_t elr_el3)
{
uint32_t linear_id = plat_my_core_pos();
tsp_stats[linear_id].sync_sel1_intr_count++;
if (type == TSP_HANDLE_SEL1_INTR_AND_RETURN)
tsp_stats[linear_id].sync_sel1_intr_ret_count++;
#if LOG_LEVEL >= LOG_LEVEL_VERBOSE
spin_lock(&console_lock);
VERBOSE("TSP: cpu 0x%lx sync s-el1 interrupt request from 0x%llx\n",
read_mpidr(), elr_el3);
VERBOSE("TSP: cpu 0x%lx: %d sync s-el1 interrupt requests,"
" %d sync s-el1 interrupt returns\n",
read_mpidr(),
tsp_stats[linear_id].sync_sel1_intr_count,
tsp_stats[linear_id].sync_sel1_intr_ret_count);
spin_unlock(&console_lock);
#endif
}
/******************************************************************************
* This function is invoked when a non S-EL1 interrupt is received and causes
* the preemption of TSP. This function returns TSP_PREEMPTED and results
* in the control being handed over to EL3 for handling the interrupt.
*****************************************************************************/
int32_t tsp_handle_preemption(void)
{
uint32_t linear_id = plat_my_core_pos();
tsp_stats[linear_id].preempt_intr_count++;
#if LOG_LEVEL >= LOG_LEVEL_VERBOSE
spin_lock(&console_lock);
VERBOSE("TSP: cpu 0x%lx: %d preempt interrupt requests\n",
read_mpidr(), tsp_stats[linear_id].preempt_intr_count);
spin_unlock(&console_lock);
#endif
return TSP_PREEMPTED;
}
/*******************************************************************************
* TSP interrupt handler is called as a part of both synchronous and
* asynchronous handling of TSP interrupts. Currently the physical timer
* interrupt is the only S-EL1 interrupt that this handler expects. It returns
* 0 upon successfully handling the expected interrupt and all other
* interrupts are treated as normal world or EL3 interrupts.
******************************************************************************/
int32_t tsp_common_int_handler(void)
{
uint32_t linear_id = plat_my_core_pos(), id;
/*
* Get the highest priority pending interrupt id and see if it is the
* secure physical generic timer interrupt in which case, handle it.
* Otherwise throw this interrupt at the EL3 firmware.
*
* There is a small time window between reading the highest priority
* pending interrupt and acknowledging it during which another
* interrupt of higher priority could become the highest pending
* interrupt. This is not expected to happen currently for TSP.
*/
id = plat_ic_get_pending_interrupt_id();
/* TSP can only handle the secure physical timer interrupt */
if (id != TSP_IRQ_SEC_PHY_TIMER)
return tsp_handle_preemption();
/*
* Acknowledge and handle the secure timer interrupt. Also sanity check
* if it has been preempted by another interrupt through an assertion.
*/
id = plat_ic_acknowledge_interrupt();
assert(id == TSP_IRQ_SEC_PHY_TIMER);
tsp_generic_timer_handler();
plat_ic_end_of_interrupt(id);
/* Update the statistics and print some messages */
tsp_stats[linear_id].sel1_intr_count++;
#if LOG_LEVEL >= LOG_LEVEL_VERBOSE
spin_lock(&console_lock);
VERBOSE("TSP: cpu 0x%lx handled S-EL1 interrupt %d\n",
read_mpidr(), id);
VERBOSE("TSP: cpu 0x%lx: %d S-EL1 requests\n",
read_mpidr(), tsp_stats[linear_id].sel1_intr_count);
spin_unlock(&console_lock);
#endif
return 0;
}
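The handler above follows the usual three-step interrupt-controller discipline: peek at the highest-priority pending ID, hand anything unexpected back to EL3, otherwise acknowledge, service, and signal end of interrupt. A stubbed, self-contained C skeleton of that flow (the interrupt number and the TSP_PREEMPTED value below are illustrative; the real ones come from platform_def.h and tsp.h):

#include <stdint.h>
#include <stdio.h>

#define TSP_IRQ_SEC_PHY_TIMER 29u /* illustrative; platform-defined */
#define TSP_PREEMPTED (-2)        /* illustrative; real value in tsp.h */

/* Stubs standing in for the platform interrupt-controller API. */
static uint32_t plat_ic_get_pending_interrupt_id(void)
{
        return TSP_IRQ_SEC_PHY_TIMER;
}

static uint32_t plat_ic_acknowledge_interrupt(void)
{
        return TSP_IRQ_SEC_PHY_TIMER;
}

static void plat_ic_end_of_interrupt(uint32_t id)
{
        (void)id;
}

static void service_timer(void)
{
        printf("secure timer serviced\n");
}

static int32_t int_handler_skeleton(void)
{
        uint32_t id = plat_ic_get_pending_interrupt_id();

        if (id != TSP_IRQ_SEC_PHY_TIMER)
                return TSP_PREEMPTED;         /* hand it back to EL3 */

        id = plat_ic_acknowledge_interrupt(); /* activate the interrupt */
        service_timer();
        plat_ic_end_of_interrupt(id);         /* deactivate it */
        return 0;
}

int main(void)
{
        return (int)int_handler_skeleton();
}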

View File

@@ -0,0 +1,411 @@
/*
* Copyright (c) 2013-2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch_helpers.h>
#include <bl_common.h>
#include <debug.h>
#include <platform.h>
#include <platform_def.h>
#include <platform_tsp.h>
#include <spinlock.h>
#include <tsp.h>
#include "tsp_private.h"
/*******************************************************************************
* Lock to control access to the console
******************************************************************************/
spinlock_t console_lock;
/*******************************************************************************
* Per cpu data structure to populate parameters for an SMC in C code and use
* a pointer to this structure in assembler code to populate x0-x7
******************************************************************************/
static tsp_args_t tsp_smc_args[PLATFORM_CORE_COUNT];
/*******************************************************************************
* Per cpu data structure to keep track of TSP activity
******************************************************************************/
work_statistics_t tsp_stats[PLATFORM_CORE_COUNT];
/*******************************************************************************
* The TSP memory footprint starts at address BL32_BASE and ends with the
* linker symbol __BL32_END__. Use these addresses to compute the TSP image
* size.
******************************************************************************/
#define BL32_TOTAL_LIMIT (unsigned long)(&__BL32_END__)
#define BL32_TOTAL_SIZE (BL32_TOTAL_LIMIT - (unsigned long) BL32_BASE)
static tsp_args_t *set_smc_args(uint64_t arg0,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7)
{
uint32_t linear_id;
tsp_args_t *pcpu_smc_args;
/*
* Return to Secure Monitor by raising an SMC. The results of the
* service are passed as arguments to the SMC
*/
linear_id = plat_my_core_pos();
pcpu_smc_args = &tsp_smc_args[linear_id];
write_sp_arg(pcpu_smc_args, TSP_ARG0, arg0);
write_sp_arg(pcpu_smc_args, TSP_ARG1, arg1);
write_sp_arg(pcpu_smc_args, TSP_ARG2, arg2);
write_sp_arg(pcpu_smc_args, TSP_ARG3, arg3);
write_sp_arg(pcpu_smc_args, TSP_ARG4, arg4);
write_sp_arg(pcpu_smc_args, TSP_ARG5, arg5);
write_sp_arg(pcpu_smc_args, TSP_ARG6, arg6);
write_sp_arg(pcpu_smc_args, TSP_ARG7, arg7);
return pcpu_smc_args;
}
/*******************************************************************************
* TSP main entry point where it gets the opportunity to initialize its secure
* state/applications. Once the state is initialized, it must return to the
* SPD with a pointer to the 'tsp_vector_table' jump table.
******************************************************************************/
uint64_t tsp_main(void)
{
NOTICE("TSP: %s\n", version_string);
NOTICE("TSP: %s\n", build_message);
INFO("TSP: Total memory base : 0x%lx\n", (unsigned long) BL32_BASE);
INFO("TSP: Total memory size : 0x%lx bytes\n", BL32_TOTAL_SIZE);
uint32_t linear_id = plat_my_core_pos();
/* Initialize the platform */
tsp_platform_setup();
/* Initialize secure/applications state here */
tsp_generic_timer_start();
/* Update this cpu's statistics */
tsp_stats[linear_id].smc_count++;
tsp_stats[linear_id].eret_count++;
tsp_stats[linear_id].cpu_on_count++;
#if LOG_LEVEL >= LOG_LEVEL_INFO
spin_lock(&console_lock);
INFO("TSP: cpu 0x%lx: %d smcs, %d erets %d cpu on requests\n",
read_mpidr(),
tsp_stats[linear_id].smc_count,
tsp_stats[linear_id].eret_count,
tsp_stats[linear_id].cpu_on_count);
spin_unlock(&console_lock);
#endif
return (uint64_t) &tsp_vector_table;
}
/*******************************************************************************
* This function performs any remaining book keeping in the test secure payload
* after this cpu's architectural state has been setup in response to an earlier
* psci cpu_on request.
******************************************************************************/
tsp_args_t *tsp_cpu_on_main(void)
{
uint32_t linear_id = plat_my_core_pos();
/* Initialize secure/applications state here */
tsp_generic_timer_start();
/* Update this cpu's statistics */
tsp_stats[linear_id].smc_count++;
tsp_stats[linear_id].eret_count++;
tsp_stats[linear_id].cpu_on_count++;
#if LOG_LEVEL >= LOG_LEVEL_INFO
spin_lock(&console_lock);
INFO("TSP: cpu 0x%lx turned on\n", read_mpidr());
INFO("TSP: cpu 0x%lx: %d smcs, %d erets %d cpu on requests\n",
read_mpidr(),
tsp_stats[linear_id].smc_count,
tsp_stats[linear_id].eret_count,
tsp_stats[linear_id].cpu_on_count);
spin_unlock(&console_lock);
#endif
/* Indicate to the SPD that we have completed turning ourselves on */
return set_smc_args(TSP_ON_DONE, 0, 0, 0, 0, 0, 0, 0);
}
/*******************************************************************************
* This function performs any remaining book keeping in the test secure payload
* before this cpu is turned off in response to a psci cpu_off request.
******************************************************************************/
tsp_args_t *tsp_cpu_off_main(uint64_t arg0,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7)
{
uint32_t linear_id = plat_my_core_pos();
/*
* This cpu is being turned off, so disable the timer to prevent the
* secure timer interrupt from interfering with power down. A pending
* interrupt will be lost but we do not care as we are turning off.
*/
tsp_generic_timer_stop();
/* Update this cpu's statistics */
tsp_stats[linear_id].smc_count++;
tsp_stats[linear_id].eret_count++;
tsp_stats[linear_id].cpu_off_count++;
#if LOG_LEVEL >= LOG_LEVEL_INFO
spin_lock(&console_lock);
INFO("TSP: cpu 0x%lx off request\n", read_mpidr());
INFO("TSP: cpu 0x%lx: %d smcs, %d erets %d cpu off requests\n",
read_mpidr(),
tsp_stats[linear_id].smc_count,
tsp_stats[linear_id].eret_count,
tsp_stats[linear_id].cpu_off_count);
spin_unlock(&console_lock);
#endif
/* Indicate to the SPD that we have completed this request */
return set_smc_args(TSP_OFF_DONE, 0, 0, 0, 0, 0, 0, 0);
}
/*******************************************************************************
* This function performs any book keeping in the test secure payload before
* this cpu's architectural state is saved in response to an earlier psci
* cpu_suspend request.
******************************************************************************/
tsp_args_t *tsp_cpu_suspend_main(uint64_t arg0,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7)
{
uint32_t linear_id = plat_my_core_pos();
/*
* Save the timer context and disable it to prevent the secure timer
* interrupt from interfering with wakeup from the suspend state.
*/
tsp_generic_timer_save();
tsp_generic_timer_stop();
/* Update this cpu's statistics */
tsp_stats[linear_id].smc_count++;
tsp_stats[linear_id].eret_count++;
tsp_stats[linear_id].cpu_suspend_count++;
#if LOG_LEVEL >= LOG_LEVEL_INFO
spin_lock(&console_lock);
INFO("TSP: cpu 0x%lx: %d smcs, %d erets %d cpu suspend requests\n",
read_mpidr(),
tsp_stats[linear_id].smc_count,
tsp_stats[linear_id].eret_count,
tsp_stats[linear_id].cpu_suspend_count);
spin_unlock(&console_lock);
#endif
/* Indicate to the SPD that we have completed this request */
return set_smc_args(TSP_SUSPEND_DONE, 0, 0, 0, 0, 0, 0, 0);
}
/*******************************************************************************
* This function performs any book keeping in the test secure payload after this
* cpu's architectural state has been restored after wakeup from an earlier psci
* cpu_suspend request.
******************************************************************************/
tsp_args_t *tsp_cpu_resume_main(uint64_t max_off_pwrlvl,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7)
{
uint32_t linear_id = plat_my_core_pos();
/* Restore the generic timer context */
tsp_generic_timer_restore();
/* Update this cpu's statistics */
tsp_stats[linear_id].smc_count++;
tsp_stats[linear_id].eret_count++;
tsp_stats[linear_id].cpu_resume_count++;
#if LOG_LEVEL >= LOG_LEVEL_INFO
spin_lock(&console_lock);
INFO("TSP: cpu 0x%lx resumed. maximum off power level %lld\n",
read_mpidr(), max_off_pwrlvl);
INFO("TSP: cpu 0x%lx: %d smcs, %d erets %d cpu suspend requests\n",
read_mpidr(),
tsp_stats[linear_id].smc_count,
tsp_stats[linear_id].eret_count,
tsp_stats[linear_id].cpu_suspend_count);
spin_unlock(&console_lock);
#endif
/* Indicate to the SPD that we have completed this request */
return set_smc_args(TSP_RESUME_DONE, 0, 0, 0, 0, 0, 0, 0);
}
/*******************************************************************************
* This function performs any remaining bookkeeping in the test secure payload
* before the system is switched off (in response to a psci SYSTEM_OFF request)
******************************************************************************/
tsp_args_t *tsp_system_off_main(uint64_t arg0,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7)
{
uint32_t linear_id = plat_my_core_pos();
/* Update this cpu's statistics */
tsp_stats[linear_id].smc_count++;
tsp_stats[linear_id].eret_count++;
#if LOG_LEVEL >= LOG_LEVEL_INFO
spin_lock(&console_lock);
INFO("TSP: cpu 0x%lx SYSTEM_OFF request\n", read_mpidr());
INFO("TSP: cpu 0x%lx: %d smcs, %d erets requests\n", read_mpidr(),
tsp_stats[linear_id].smc_count,
tsp_stats[linear_id].eret_count);
spin_unlock(&console_lock);
#endif
/* Indicate to the SPD that we have completed this request */
return set_smc_args(TSP_SYSTEM_OFF_DONE, 0, 0, 0, 0, 0, 0, 0);
}
/*******************************************************************************
* This function performs any remaining bookkeeping in the test secure payload
* before the system is reset (in response to a psci SYSTEM_RESET request)
******************************************************************************/
tsp_args_t *tsp_system_reset_main(uint64_t arg0,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7)
{
uint32_t linear_id = plat_my_core_pos();
/* Update this cpu's statistics */
tsp_stats[linear_id].smc_count++;
tsp_stats[linear_id].eret_count++;
#if LOG_LEVEL >= LOG_LEVEL_INFO
spin_lock(&console_lock);
INFO("TSP: cpu 0x%lx SYSTEM_RESET request\n", read_mpidr());
INFO("TSP: cpu 0x%lx: %d smcs, %d erets requests\n", read_mpidr(),
tsp_stats[linear_id].smc_count,
tsp_stats[linear_id].eret_count);
spin_unlock(&console_lock);
#endif
/* Indicate to the SPD that we have completed this request */
return set_smc_args(TSP_SYSTEM_RESET_DONE, 0, 0, 0, 0, 0, 0, 0);
}
/*******************************************************************************
* TSP fast smc handler. The secure monitor jumps to this function by
* doing the ERET after populating X0-X7 registers. The arguments are received
* in the function arguments in order. Once the service is rendered, this
* function returns to the Secure Monitor by raising an SMC.
******************************************************************************/
tsp_args_t *tsp_smc_handler(uint64_t func,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7)
{
uint64_t results[2];
uint64_t service_args[2];
uint32_t linear_id = plat_my_core_pos();
/* Update this cpu's statistics */
tsp_stats[linear_id].smc_count++;
tsp_stats[linear_id].eret_count++;
INFO("TSP: cpu 0x%lx received %s smc 0x%llx\n", read_mpidr(),
((func >> 31) & 1) == 1 ? "fast" : "yielding",
func);
INFO("TSP: cpu 0x%lx: %d smcs, %d erets\n", read_mpidr(),
tsp_stats[linear_id].smc_count,
tsp_stats[linear_id].eret_count);
/* Render secure services and obtain results here */
results[0] = arg1;
results[1] = arg2;
/*
* Request a service back from the dispatcher/secure monitor. This call
* will return, and execution will resume afterwards.
*/
tsp_get_magic(service_args);
/* Determine the function to perform based on the function ID */
switch (TSP_BARE_FID(func)) {
case TSP_ADD:
results[0] += service_args[0];
results[1] += service_args[1];
break;
case TSP_SUB:
results[0] -= service_args[0];
results[1] -= service_args[1];
break;
case TSP_MUL:
results[0] *= service_args[0];
results[1] *= service_args[1];
break;
case TSP_DIV:
results[0] /= service_args[0] ? service_args[0] : 1;
results[1] /= service_args[1] ? service_args[1] : 1;
break;
default:
break;
}
return set_smc_args(func, 0,
results[0],
results[1],
0, 0, 0, 0);
}
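/*
 * Illustrative sketch, not part of the original file: the fast/yielding test
 * in tsp_smc_handler() above follows the SMC Calling Convention, where bit
 * [31] of the function ID is the call-type bit (1 = fast, 0 = yielding).
 * A standalone predicate would look like this:
 */
static inline int example_is_fast_smc(uint64_t func)
{
	return ((func >> 31) & 1) == 1;
}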
/*******************************************************************************
* TSP smc abort handler. This function is called when aborting a preempted
* yielding SMC request. It should clean up all resources owned by the SMC
* handler, such as locks or dynamically allocated memory, so that subsequent
* SMC requests are executed in a clean environment.
******************************************************************************/
tsp_args_t *tsp_abort_smc_handler(uint64_t func,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7)
{
return set_smc_args(TSP_ABORT_DONE, 0, 0, 0, 0, 0, 0, 0);
}

View File

@@ -0,0 +1,153 @@
/*
* Copyright (c) 2014-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef __TSP_PRIVATE_H__
#define __TSP_PRIVATE_H__
/* Definitions to help the assembler access the SMC/ERET args structure */
#define TSP_ARGS_SIZE 0x40
#define TSP_ARG0 0x0
#define TSP_ARG1 0x8
#define TSP_ARG2 0x10
#define TSP_ARG3 0x18
#define TSP_ARG4 0x20
#define TSP_ARG5 0x28
#define TSP_ARG6 0x30
#define TSP_ARG7 0x38
#define TSP_ARGS_END 0x40
#ifndef __ASSEMBLY__
#include <cassert.h>
#include <platform_def.h> /* For CACHE_WRITEBACK_GRANULE */
#include <spinlock.h>
#include <stdint.h>
#include <tsp.h>
typedef struct work_statistics {
/* Number of s-el1 interrupts on this cpu */
uint32_t sel1_intr_count;
/* Number of non s-el1 interrupts on this cpu which preempted TSP */
uint32_t preempt_intr_count;
/* Number of sync s-el1 interrupts on this cpu */
uint32_t sync_sel1_intr_count;
/* Number of sync s-el1 interrupt returns on this cpu */
uint32_t sync_sel1_intr_ret_count;
uint32_t smc_count; /* Number of returns on this cpu */
uint32_t eret_count; /* Number of entries on this cpu */
uint32_t cpu_on_count; /* Number of cpu on requests */
uint32_t cpu_off_count; /* Number of cpu off requests */
uint32_t cpu_suspend_count; /* Number of cpu suspend requests */
uint32_t cpu_resume_count; /* Number of cpu resume requests */
} __aligned(CACHE_WRITEBACK_GRANULE) work_statistics_t;
typedef struct tsp_args {
uint64_t _regs[TSP_ARGS_END >> 3];
} __aligned(CACHE_WRITEBACK_GRANULE) tsp_args_t;
/* Macros to access members of the above structure using their offsets */
#define read_sp_arg(args, offset) ((args)->_regs[offset >> 3])
#define write_sp_arg(args, offset, val) (((args)->_regs[offset >> 3]) \
= val)
/*
* Ensure that the assembler's view of the size of the tsp_args is the
* same as the compiler's
*/
CASSERT(TSP_ARGS_SIZE == sizeof(tsp_args_t), assert_sp_args_size_mismatch);
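/*
 * Usage sketch (illustrative, not part of the original header): the accessors
 * above index _regs[] by byte offset, so TSP_ARG0..TSP_ARG7 map directly to
 * _regs[0].._regs[7].
 */
static inline void example_write_ret_args(tsp_args_t *args, uint64_t fid)
{
	write_sp_arg(args, TSP_ARG0, fid);	/* same as args->_regs[0] = fid */
	write_sp_arg(args, TSP_ARG1, 0);	/* same as args->_regs[1] = 0 */
}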
void tsp_get_magic(uint64_t args[4]);
tsp_args_t *tsp_cpu_resume_main(uint64_t max_off_pwrlvl,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7);
tsp_args_t *tsp_cpu_suspend_main(uint64_t arg0,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7);
tsp_args_t *tsp_cpu_on_main(void);
tsp_args_t *tsp_cpu_off_main(uint64_t arg0,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7);
/* Generic Timer functions */
void tsp_generic_timer_start(void);
void tsp_generic_timer_handler(void);
void tsp_generic_timer_stop(void);
void tsp_generic_timer_save(void);
void tsp_generic_timer_restore(void);
/* S-EL1 interrupt management functions */
void tsp_update_sync_sel1_intr_stats(uint32_t type, uint64_t elr_el3);
/* Data structure to keep track of TSP statistics */
extern spinlock_t console_lock;
extern work_statistics_t tsp_stats[PLATFORM_CORE_COUNT];
/* Vector table of jumps */
extern tsp_vectors_t tsp_vector_table;
/* functions */
int32_t tsp_common_int_handler(void);
int32_t tsp_handle_preemption(void);
tsp_args_t *tsp_abort_smc_handler(uint64_t func,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7);
tsp_args_t *tsp_smc_handler(uint64_t func,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7);
tsp_args_t *tsp_system_reset_main(uint64_t arg0,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7);
tsp_args_t *tsp_system_off_main(uint64_t arg0,
uint64_t arg1,
uint64_t arg2,
uint64_t arg3,
uint64_t arg4,
uint64_t arg5,
uint64_t arg6,
uint64_t arg7);
uint64_t tsp_main(void);
#endif /* __ASSEMBLY__ */
#endif /* __TSP_PRIVATE_H__ */

View File

@@ -0,0 +1,88 @@
/*
* Copyright (c) 2014-2015, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch_helpers.h>
#include <assert.h>
#include <platform.h>
#include "tsp_private.h"
/*******************************************************************************
* Data structure to keep track of per-cpu secure generic timer context across
* power management operations.
******************************************************************************/
typedef struct timer_context {
uint64_t cval;
uint32_t ctl;
} timer_context_t;
static timer_context_t pcpu_timer_context[PLATFORM_CORE_COUNT];
/*******************************************************************************
* This function initializes the generic timer to fire every 0.5 seconds
******************************************************************************/
void tsp_generic_timer_start(void)
{
uint64_t cval;
uint32_t ctl = 0;
/* The timer will fire every 0.5 seconds */
cval = read_cntpct_el0() + (read_cntfrq_el0() >> 1);
write_cntps_cval_el1(cval);
/* Enable the secure physical timer */
set_cntp_ctl_enable(ctl);
write_cntps_ctl_el1(ctl);
}
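/*
 * Sketch (not in the original file): the 0.5 second period above works
 * because read_cntfrq_el0() returns the counter frequency in ticks per
 * second, so half of it is the tick count for 500 ms. Generalized to an
 * arbitrary period in milliseconds:
 */
static inline uint64_t example_cval_after_ms(uint32_t period_ms)
{
	return read_cntpct_el0() + (read_cntfrq_el0() * period_ms) / 1000U;
}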
/*******************************************************************************
* This function deasserts the timer interrupt and sets it up again
******************************************************************************/
void tsp_generic_timer_handler(void)
{
/* Ensure that the timer did assert the interrupt */
assert(get_cntp_ctl_istatus(read_cntps_ctl_el1()));
/*
* Disable the timer and reprogram it. The barriers ensure that there is
* no reordering of instructions around the reprogramming code.
*/
isb();
write_cntps_ctl_el1(0);
tsp_generic_timer_start();
isb();
}
/*******************************************************************************
* This function deasserts the timer interrupt prior to cpu power down
******************************************************************************/
void tsp_generic_timer_stop(void)
{
/* Disable the timer */
write_cntps_ctl_el1(0);
}
/*******************************************************************************
* This function saves the timer context prior to cpu suspension
******************************************************************************/
void tsp_generic_timer_save(void)
{
uint32_t linear_id = plat_my_core_pos();
pcpu_timer_context[linear_id].cval = read_cntps_cval_el1();
pcpu_timer_context[linear_id].ctl = read_cntps_ctl_el1();
flush_dcache_range((uint64_t) &pcpu_timer_context[linear_id],
sizeof(pcpu_timer_context[linear_id]));
}
/*******************************************************************************
* This function restores the timer context after cpu resumption
******************************************************************************/
void tsp_generic_timer_restore(void)
{
uint32_t linear_id = plat_my_core_pos();
write_cntps_cval_el1(pcpu_timer_context[linear_id].cval);
write_cntps_ctl_el1(pcpu_timer_context[linear_id].ctl);
}

View File

@@ -0,0 +1,192 @@
/*
* Copyright (c) 2016-2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
.globl asm_assert
.globl do_panic
.globl report_exception
/* Since the max decimal input number is 65536 */
#define MAX_DEC_DIVISOR 10000
/* The offset to add to get ascii for numerals '0 - 9' */
#define ASCII_OFFSET_NUM '0'
.section .rodata.panic_str, "aS"
panic_msg:
.asciz "PANIC at PC : 0x"
panic_end:
.asciz "\r\n"
/***********************************************************
* The common implementation of do_panic for all BL stages
***********************************************************/
func do_panic
/* Have LR copy point to PC at the time of panic */
sub r6, lr, #4
/* Initialize crash console and verify success */
bl plat_crash_console_init
cmp r0, #0
beq 1f
/* Print panic message */
ldr r4, =panic_msg
bl asm_print_str
/* Print LR in hex */
mov r4, r6
bl asm_print_hex
/* Print new line */
ldr r4, =panic_end
bl asm_print_str
bl plat_crash_console_flush
1:
mov lr, r6
b plat_panic_handler
endfunc do_panic
/***********************************************************
* This function is called from the vector table for
* unhandled exceptions. It reads the current mode and
* passes it to platform.
***********************************************************/
func report_exception
mrs r0, cpsr
and r0, #MODE32_MASK
bl plat_report_exception
no_ret plat_panic_handler
endfunc report_exception
#if ENABLE_ASSERTIONS
.section .rodata.assert_str, "aS"
assert_msg1:
.asciz "ASSERT: File "
assert_msg2:
#if ARM_ARCH_MAJOR == 7 && !defined(ARMV7_SUPPORTS_VIRTUALIZATION)
/******************************************************************
* The UDIV/SDIV instructions only come with the Virtualization
* extensions. If they are missing, print the file's line number in
* hexadecimal format instead.
******************************************************************/
.asciz " Line 0x"
#else
.asciz " Line "
#endif
/* ---------------------------------------------------------------------------
* Assertion support in assembly.
* The below function helps to support assertions in assembly where we do not
* have a C runtime stack. Arguments to the function are :
* r0 - File name
* r1 - Line no
* Clobber list : lr, r0 - r6
* ---------------------------------------------------------------------------
*/
func asm_assert
#if LOG_LEVEL >= LOG_LEVEL_INFO
/*
* Only print the output if LOG_LEVEL is higher or equal to
* LOG_LEVEL_INFO, which is the default value for builds with DEBUG=1.
*/
/* Stash the parameters already in r0 and r1 */
mov r5, r0
mov r6, r1
/* Initialize crash console and verify success */
bl plat_crash_console_init
cmp r0, #0
beq 1f
/* Print file name */
ldr r4, =assert_msg1
bl asm_print_str
mov r4, r5
bl asm_print_str
/* Print line number string */
ldr r4, =assert_msg2
bl asm_print_str
/* Test for maximum supported line number */
ldr r4, =~0xffff
tst r6, r4
bne 1f
mov r4, r6
#if ARM_ARCH_MAJOR == 7 && !defined(ARMV7_SUPPORTS_VIRTUALIZATION)
/******************************************************************
* The UDIV/SDIV instructions only come with the Virtualization
* extensions. If they are missing, print the file's line number in
* hexadecimal format instead.
******************************************************************/
bl asm_print_hex
#else
/* Print line number in decimal */
mov r6, #10 /* Divide by 10 after every loop iteration */
ldr r5, =MAX_DEC_DIVISOR
dec_print_loop:
udiv r0, r4, r5 /* Quotient */
mls r4, r0, r5, r4 /* Remainder */
add r0, r0, #ASCII_OFFSET_NUM /* Convert to ASCII */
bl plat_crash_console_putc
udiv r5, r5, r6 /* Reduce divisor */
cmp r5, #0
bne dec_print_loop
#endif
bl plat_crash_console_flush
1:
#endif /* LOG_LEVEL >= LOG_LEVEL_INFO */
no_ret plat_panic_handler
endfunc asm_assert
#endif /* ENABLE_ASSERTIONS */
/*
* This function prints a string from address in r4
* Clobber: lr, r0 - r4
*/
func asm_print_str
mov r3, lr
1:
ldrb r0, [r4], #0x1
cmp r0, #0
beq 2f
bl plat_crash_console_putc
b 1b
2:
bx r3
endfunc asm_print_str
/*
* This function prints a hexadecimal number in r4.
* In: r4 = the hexadecimal to print.
* Clobber: lr, r0 - r3, r5
*/
func asm_print_hex
mov r3, lr
mov r5, #32 /* No of bits to convert to ascii */
1:
sub r5, r5, #4
lsr r0, r4, r5
and r0, r0, #0xf
cmp r0, #0xa
blo 2f
/* Add 0x27, in addition to ASCII_OFFSET_NUM,
* to get the ascii characters 'a' - 'f'.
*/
add r0, r0, #0x27
2:
add r0, r0, #ASCII_OFFSET_NUM
bl plat_crash_console_putc
cmp r5, #0
bne 1b
bx r3
endfunc asm_print_hex

View File

@@ -0,0 +1,181 @@
/*
* Copyright (c) 2014-2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <debug.h>
.globl asm_print_str
.globl asm_print_hex
.globl asm_assert
.globl do_panic
/* Since the max decimal input number is 65536 */
#define MAX_DEC_DIVISOR 10000
/* The offset to add to get ascii for numerals '0 - 9' */
#define ASCII_OFFSET_NUM 0x30
#if ENABLE_ASSERTIONS
.section .rodata.assert_str, "aS"
assert_msg1:
.asciz "ASSERT: File "
assert_msg2:
.asciz " Line "
/*
* This macro is intended to be used to print the
* line number in decimal. Used by asm_assert macro.
* The max number expected is 65536.
* In: x4 = the decimal to print.
* Clobber: x30, x0, x1, x2, x5, x6
*/
.macro asm_print_line_dec
mov x6, #10 /* Divide by 10 after every loop iteration */
mov x5, #MAX_DEC_DIVISOR
dec_print_loop:
udiv x0, x4, x5 /* Get the quotient */
msub x4, x0, x5, x4 /* Find the remainder */
add x0, x0, #ASCII_OFFSET_NUM /* Convert to ascii */
bl plat_crash_console_putc
udiv x5, x5, x6 /* Reduce divisor */
cbnz x5, dec_print_loop
.endm
/* ---------------------------------------------------------------------------
* Assertion support in assembly.
* The below function helps to support assertions in assembly where we do not
* have a C runtime stack. Arguments to the function are :
* x0 - File name
* x1 - Line no
* Clobber list : x30, x0, x1, x2, x3, x4, x5, x6.
* ---------------------------------------------------------------------------
*/
func asm_assert
#if LOG_LEVEL >= LOG_LEVEL_INFO
/*
* Only print the output if LOG_LEVEL is higher or equal to
* LOG_LEVEL_INFO, which is the default value for builds with DEBUG=1.
*/
mov x5, x0
mov x6, x1
/* Ensure the console is initialized */
bl plat_crash_console_init
/* Check if the console is initialized */
cbz x0, _assert_loop
/* The console is initialized */
adr x4, assert_msg1
bl asm_print_str
mov x4, x5
bl asm_print_str
adr x4, assert_msg2
bl asm_print_str
/* Check if line number higher than max permitted */
tst x6, #~0xffff
b.ne _assert_loop
mov x4, x6
asm_print_line_dec
bl plat_crash_console_flush
_assert_loop:
#endif /* LOG_LEVEL >= LOG_LEVEL_INFO */
no_ret plat_panic_handler
endfunc asm_assert
#endif /* ENABLE_ASSERTIONS */
/*
* This function prints a string from address in x4.
* In: x4 = pointer to string.
* Clobber: x30, x0, x1, x2, x3
*/
func asm_print_str
mov x3, x30
1:
ldrb w0, [x4], #0x1
cbz x0, 2f
bl plat_crash_console_putc
b 1b
2:
ret x3
endfunc asm_print_str
/*
* This function prints a hexadecimal number in x4.
* In: x4 = the hexadecimal to print.
* Clobber: x30, x0 - x3, x5
*/
func asm_print_hex
mov x3, x30
mov x5, #64 /* No of bits to convert to ascii */
1:
sub x5, x5, #4
lsrv x0, x4, x5
and x0, x0, #0xf
cmp x0, #0xA
b.lo 2f
/* Add 0x27, in addition to ASCII_OFFSET_NUM,
* to get the ascii characters 'a' - 'f'.
*/
add x0, x0, #0x27
2:
add x0, x0, #ASCII_OFFSET_NUM
bl plat_crash_console_putc
cbnz x5, 1b
ret x3
endfunc asm_print_hex
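/*
 * Illustrative note (not part of the original file): because x5 starts at 64
 * and decrements by 4 per nibble, asm_print_hex always emits exactly 16 hex
 * digits, including leading zeros, e.g. x4 = 0x1f prints "000000000000001f".
 */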
/***********************************************************
* The common implementation of do_panic for all BL stages
***********************************************************/
.section .rodata.panic_str, "aS"
panic_msg: .asciz "PANIC at PC : 0x"
/* ---------------------------------------------------------------------------
* do_panic assumes that it is invoked from a C runtime environment, i.e. a
* valid stack exists. This call will not return.
* Clobber list : if CRASH_REPORTING is not enabled then x30, x0 - x6
* ---------------------------------------------------------------------------
*/
/* This is for the non el3 BL stages to compile through */
.weak el3_panic
func do_panic
#if CRASH_REPORTING
str x0, [sp, #-0x10]!
mrs x0, currentel
ubfx x0, x0, #2, #2
cmp x0, #0x3
ldr x0, [sp], #0x10
b.eq el3_panic
#endif
panic_common:
/*
* el3_panic will be redefined by the BL31
* crash reporting mechanism (if enabled)
*/
el3_panic:
mov x6, x30
bl plat_crash_console_init
/* Check if the console is initialized */
cbz x0, _panic_handler
/* The console is initialized */
adr x4, panic_msg
bl asm_print_str
mov x4, x6
/* The panic location is lr - 4 */
sub x4, x4, #4
bl asm_print_hex
bl plat_crash_console_flush
_panic_handler:
/* Pass to plat_panic_handler the address from where el3_panic was
* called, not the address of the call from el3_panic. */
mov x30, x6
b plat_panic_handler
endfunc do_panic

View File

@@ -0,0 +1,129 @@
/*
* Copyright (c) 2013-2016, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <asm_macros.S>
#include <bl_common.h>
/* -----------------------------------------------------------------------------
* Very simple stackless exception handlers used by BL2 and BL31 stages.
* BL31 uses them before stacks are setup. BL2 uses them throughout.
* -----------------------------------------------------------------------------
*/
.globl early_exceptions
vector_base early_exceptions
/* -----------------------------------------------------
* Current EL with SP0 : 0x0 - 0x200
* -----------------------------------------------------
*/
vector_entry SynchronousExceptionSP0
mov x0, #SYNC_EXCEPTION_SP_EL0
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SynchronousExceptionSP0
vector_entry IrqSP0
mov x0, #IRQ_SP_EL0
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size IrqSP0
vector_entry FiqSP0
mov x0, #FIQ_SP_EL0
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size FiqSP0
vector_entry SErrorSP0
mov x0, #SERROR_SP_EL0
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SErrorSP0
/* -----------------------------------------------------
* Current EL with SPx: 0x200 - 0x400
* -----------------------------------------------------
*/
vector_entry SynchronousExceptionSPx
mov x0, #SYNC_EXCEPTION_SP_ELX
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SynchronousExceptionSPx
vector_entry IrqSPx
mov x0, #IRQ_SP_ELX
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size IrqSPx
vector_entry FiqSPx
mov x0, #FIQ_SP_ELX
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size FiqSPx
vector_entry SErrorSPx
mov x0, #SERROR_SP_ELX
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SErrorSPx
/* -----------------------------------------------------
* Lower EL using AArch64 : 0x400 - 0x600
* -----------------------------------------------------
*/
vector_entry SynchronousExceptionA64
mov x0, #SYNC_EXCEPTION_AARCH64
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SynchronousExceptionA64
vector_entry IrqA64
mov x0, #IRQ_AARCH64
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size IrqA64
vector_entry FiqA64
mov x0, #FIQ_AARCH64
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size FiqA64
vector_entry SErrorA64
mov x0, #SERROR_AARCH64
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SErrorA64
/* -----------------------------------------------------
* Lower EL using AArch32 : 0x600 - 0x800
* -----------------------------------------------------
*/
vector_entry SynchronousExceptionA32
mov x0, #SYNC_EXCEPTION_AARCH32
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SynchronousExceptionA32
vector_entry IrqA32
mov x0, #IRQ_AARCH32
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size IrqA32
vector_entry FiqA32
mov x0, #FIQ_AARCH32
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size FiqA32
vector_entry SErrorA32
mov x0, #SERROR_AARCH32
bl plat_report_exception
no_ret plat_panic_handler
check_vector_size SErrorA32

View File

@@ -0,0 +1,620 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <arch_helpers.h>
#include <assert.h>
#include <auth_mod.h>
#include <bl_common.h>
#include <debug.h>
#include <errno.h>
#include <io_storage.h>
#include <platform.h>
#include <string.h>
#include <utils.h>
#include <xlat_tables_defs.h>
#if TRUSTED_BOARD_BOOT
# ifdef DYN_DISABLE_AUTH
static int disable_auth;
/******************************************************************************
* API to dynamically disable authentication. Only meant for development
* systems. This is only invoked if DYN_DISABLE_AUTH is defined. This
* capability is restricted to LOAD_IMAGE_V2.
*****************************************************************************/
void dyn_disable_auth(void)
{
INFO("Disabling authentication of images dynamically\n");
disable_auth = 1;
}
# endif /* DYN_DISABLE_AUTH */
/******************************************************************************
* Function to determine whether the authentication is disabled dynamically.
*****************************************************************************/
static int dyn_is_auth_disabled(void)
{
# ifdef DYN_DISABLE_AUTH
return disable_auth;
# else
return 0;
# endif
}
#endif /* TRUSTED_BOARD_BOOT */
uintptr_t page_align(uintptr_t value, unsigned dir)
{
/* Round the value to the adjacent page boundary, in the requested direction */
if (value & (PAGE_SIZE - 1)) {
value &= ~(PAGE_SIZE - 1);
if (dir == UP)
value += PAGE_SIZE;
}
return value;
}
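/*
 * Hypothetical self-checks (not in the original file), assuming a 4KB
 * PAGE_SIZE and the UP/DOWN direction constants used by callers of
 * page_align():
 */
static void example_page_align_checks(void)
{
	assert(page_align(0x1234, UP) == 0x2000);	/* rounded up */
	assert(page_align(0x1234, DOWN) == 0x1000);	/* rounded down */
	assert(page_align(0x2000, UP) == 0x2000);	/* already aligned */
}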
/******************************************************************************
* Determine whether the memory region delimited by 'addr' and 'size' is free,
* given the extents of free memory.
* Return 1 if it is free, 0 if it is not free or if the input values are
* invalid.
*****************************************************************************/
int is_mem_free(uintptr_t free_base, size_t free_size,
uintptr_t addr, size_t size)
{
uintptr_t free_end, requested_end;
/*
* Handle corner cases first.
*
* The order of the 2 tests is important, because if there's no space
* left (i.e. free_size == 0) but we don't ask for any memory
* (i.e. size == 0) then we should report that the memory is free.
*/
if (size == 0)
return 1; /* A zero-byte region is always free */
if (free_size == 0)
return 0;
/*
* Check that the end addresses don't overflow.
* If they do, consider that this memory region is not free, as this
* is an invalid scenario.
*/
if (check_uptr_overflow(free_base, free_size - 1))
return 0;
free_end = free_base + (free_size - 1);
if (check_uptr_overflow(addr, size - 1))
return 0;
requested_end = addr + (size - 1);
/*
* Finally, check that the requested memory region lies within the free
* region.
*/
return (addr >= free_base) && (requested_end <= free_end);
}
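/*
 * Behaviour sketch (illustrative, not in the original file): with a free
 * region of [0x1000, 0x1FFF], a request inside it is free, a request
 * crossing its end is not, and a zero-byte request is always free:
 */
static void example_is_mem_free_checks(void)
{
	assert(is_mem_free(0x1000, 0x1000, 0x1800, 0x100) == 1);
	assert(is_mem_free(0x1000, 0x1000, 0x1F00, 0x200) == 0);
	assert(is_mem_free(0x1000, 0x1000, 0x3000, 0) == 1);
}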
#if !LOAD_IMAGE_V2
/******************************************************************************
* Inside a given memory region, determine whether a sub-region of memory is
* closer to the top or the bottom of the encompassing region. Return the
* size of the smallest chunk of free memory surrounding the sub-region in
* 'small_chunk_size'.
*****************************************************************************/
static unsigned int choose_mem_pos(uintptr_t mem_start, uintptr_t mem_end,
uintptr_t submem_start, uintptr_t submem_end,
size_t *small_chunk_size)
{
size_t top_chunk_size, bottom_chunk_size;
assert(mem_start <= submem_start);
assert(submem_start <= submem_end);
assert(submem_end <= mem_end);
assert(small_chunk_size != NULL);
top_chunk_size = mem_end - submem_end;
bottom_chunk_size = submem_start - mem_start;
if (top_chunk_size < bottom_chunk_size) {
*small_chunk_size = top_chunk_size;
return TOP;
} else {
*small_chunk_size = bottom_chunk_size;
return BOTTOM;
}
}
/******************************************************************************
* Reserve the memory region delimited by 'addr' and 'size'. The extents of free
* memory are passed in 'free_base' and 'free_size' and they will be updated to
* reflect the memory usage.
* The caller must ensure the memory to reserve is free and that the addresses
* and sizes passed in arguments are sane.
*****************************************************************************/
void reserve_mem(uintptr_t *free_base, size_t *free_size,
uintptr_t addr, size_t size)
{
size_t discard_size;
size_t reserved_size;
unsigned int pos;
assert(free_base != NULL);
assert(free_size != NULL);
assert(is_mem_free(*free_base, *free_size, addr, size));
if (size == 0) {
WARN("Nothing to allocate, requested size is zero\n");
return;
}
pos = choose_mem_pos(*free_base, *free_base + (*free_size - 1),
addr, addr + (size - 1),
&discard_size);
reserved_size = size + discard_size;
*free_size -= reserved_size;
if (pos == BOTTOM)
*free_base = addr + size;
VERBOSE("Reserved 0x%zx bytes (discarded 0x%zx bytes %s)\n",
reserved_size, discard_size,
pos == TOP ? "above" : "below");
}
static void dump_load_info(uintptr_t image_load_addr,
size_t image_size,
const meminfo_t *mem_layout)
{
INFO("Trying to load image at address %p, size = 0x%zx\n",
(void *)image_load_addr, image_size);
INFO("Current memory layout:\n");
INFO(" total region = [base = %p, size = 0x%zx]\n",
(void *) mem_layout->total_base, mem_layout->total_size);
INFO(" free region = [base = %p, size = 0x%zx]\n",
(void *) mem_layout->free_base, mem_layout->free_size);
}
#endif /* LOAD_IMAGE_V2 */
/* Generic function to return the size of an image */
size_t get_image_size(unsigned int image_id)
{
uintptr_t dev_handle;
uintptr_t image_handle;
uintptr_t image_spec;
size_t image_size = 0;
int io_result;
/* Obtain a reference to the image by querying the platform layer */
io_result = plat_get_image_source(image_id, &dev_handle, &image_spec);
if (io_result != 0) {
WARN("Failed to obtain reference to image id=%u (%i)\n",
image_id, io_result);
return 0;
}
/* Attempt to access the image */
io_result = io_open(dev_handle, image_spec, &image_handle);
if (io_result != 0) {
WARN("Failed to access image id=%u (%i)\n",
image_id, io_result);
return 0;
}
/* Find the size of the image */
io_result = io_size(image_handle, &image_size);
if ((io_result != 0) || (image_size == 0)) {
WARN("Failed to determine the size of the image id=%u (%i)\n",
image_id, io_result);
}
io_result = io_close(image_handle);
/* Ignore improbable/unrecoverable error in 'close' */
/* TODO: Consider maintaining open device connection from this
* bootloader stage
*/
io_result = io_dev_close(dev_handle);
/* Ignore improbable/unrecoverable error in 'dev_close' */
return image_size;
}
#if LOAD_IMAGE_V2
/*******************************************************************************
* Internal function to load an image at a specific address given
* an image ID and extents of free memory.
*
* If the load is successful then the image information is updated.
*
* Returns 0 on success, a negative error code otherwise.
******************************************************************************/
static int load_image(unsigned int image_id, image_info_t *image_data)
{
uintptr_t dev_handle;
uintptr_t image_handle;
uintptr_t image_spec;
uintptr_t image_base;
size_t image_size;
size_t bytes_read;
int io_result;
assert(image_data != NULL);
assert(image_data->h.version >= VERSION_2);
image_base = image_data->image_base;
/* Obtain a reference to the image by querying the platform layer */
io_result = plat_get_image_source(image_id, &dev_handle, &image_spec);
if (io_result != 0) {
WARN("Failed to obtain reference to image id=%u (%i)\n",
image_id, io_result);
return io_result;
}
/* Attempt to access the image */
io_result = io_open(dev_handle, image_spec, &image_handle);
if (io_result != 0) {
WARN("Failed to access image id=%u (%i)\n",
image_id, io_result);
return io_result;
}
INFO("Loading image id=%u at address %p\n", image_id,
(void *) image_base);
/* Find the size of the image */
io_result = io_size(image_handle, &image_size);
if ((io_result != 0) || (image_size == 0)) {
WARN("Failed to determine the size of the image id=%u (%i)\n",
image_id, io_result);
goto exit;
}
/* Check that the image size to load is within limit */
if (image_size > image_data->image_max_size) {
WARN("Image id=%u size out of bounds\n", image_id);
io_result = -EFBIG;
goto exit;
}
image_data->image_size = image_size;
/* We have enough space so load the image now */
/* TODO: Consider whether to try to recover/retry a partially successful read */
io_result = io_read(image_handle, image_base, image_size, &bytes_read);
if ((io_result != 0) || (bytes_read < image_size)) {
WARN("Failed to load image id=%u (%i)\n", image_id, io_result);
goto exit;
}
INFO("Image id=%u loaded: %p - %p\n", image_id, (void *) image_base,
(void *) (image_base + image_size));
exit:
io_close(image_handle);
/* Ignore improbable/unrecoverable error in 'close' */
/* TODO: Consider maintaining open device connection from this bootloader stage */
io_dev_close(dev_handle);
/* Ignore improbable/unrecoverable error in 'dev_close' */
return io_result;
}
static int load_auth_image_internal(unsigned int image_id,
image_info_t *image_data,
int is_parent_image)
{
int rc;
#if TRUSTED_BOARD_BOOT
if (dyn_is_auth_disabled() == 0) {
unsigned int parent_id;
/* Use recursion to authenticate parent images */
rc = auth_mod_get_parent_id(image_id, &parent_id);
if (rc == 0) {
rc = load_auth_image_internal(parent_id, image_data, 1);
if (rc != 0) {
return rc;
}
}
}
#endif /* TRUSTED_BOARD_BOOT */
/* Load the image */
rc = load_image(image_id, image_data);
if (rc != 0) {
return rc;
}
#if TRUSTED_BOARD_BOOT
if (dyn_is_auth_disabled() == 0) {
/* Authenticate it */
rc = auth_mod_verify_img(image_id,
(void *)image_data->image_base,
image_data->image_size);
if (rc != 0) {
/* Authentication error, zero memory and flush it right away. */
zero_normalmem((void *)image_data->image_base,
image_data->image_size);
flush_dcache_range(image_data->image_base,
image_data->image_size);
return -EAUTH;
}
}
#endif /* TRUSTED_BOARD_BOOT */
/*
* Flush the image to main memory so that it can be executed later by
* any CPU, regardless of cache and MMU state. If TBB is enabled, then
* the file has been successfully loaded and authenticated and flush
* only for child images, not for the parents (certificates).
*/
if (!is_parent_image) {
flush_dcache_range(image_data->image_base,
image_data->image_size);
}
return 0;
}
/*******************************************************************************
* Generic function to load and authenticate an image. The image is actually
* loaded by calling the 'load_image()' function. Therefore, it returns the
* same error codes if the loading operation failed, or -EAUTH if the
* authentication failed. In addition, this function uses recursion to
* authenticate the parent images up to the root of trust.
******************************************************************************/
int load_auth_image(unsigned int image_id, image_info_t *image_data)
{
int err;
do {
err = load_auth_image_internal(image_id, image_data, 0);
} while (err != 0 && plat_try_next_boot_source());
return err;
}
#else /* LOAD_IMAGE_V2 */
/*******************************************************************************
* Generic function to load an image at a specific address given an image ID and
* extents of free memory.
*
* If the load is successful then the image information is updated.
*
* If the entry_point_info argument is not NULL then this function also updates:
* - the memory layout to mark the memory as reserved;
* - the entry point information.
*
* The caller might pass a NULL pointer for the entry point if they are not
* interested in this information. This is typically the case for non-executable
* images (e.g. certificates) and executable images that won't ever be executed
* on the application processor (e.g. additional microcontroller firmware).
*
* Returns 0 on success, a negative error code otherwise.
******************************************************************************/
int load_image(meminfo_t *mem_layout,
unsigned int image_id,
uintptr_t image_base,
image_info_t *image_data,
entry_point_info_t *entry_point_info)
{
uintptr_t dev_handle;
uintptr_t image_handle;
uintptr_t image_spec;
size_t image_size;
size_t bytes_read;
int io_result;
assert(mem_layout != NULL);
assert(image_data != NULL);
assert(image_data->h.version == VERSION_1);
/* Obtain a reference to the image by querying the platform layer */
io_result = plat_get_image_source(image_id, &dev_handle, &image_spec);
if (io_result != 0) {
WARN("Failed to obtain reference to image id=%u (%i)\n",
image_id, io_result);
return io_result;
}
/* Attempt to access the image */
io_result = io_open(dev_handle, image_spec, &image_handle);
if (io_result != 0) {
WARN("Failed to access image id=%u (%i)\n",
image_id, io_result);
return io_result;
}
INFO("Loading image id=%u at address %p\n", image_id,
(void *) image_base);
/* Find the size of the image */
io_result = io_size(image_handle, &image_size);
if ((io_result != 0) || (image_size == 0)) {
WARN("Failed to determine the size of the image id=%u (%i)\n",
image_id, io_result);
goto exit;
}
/* Check that the memory where the image will be loaded is free */
if (!is_mem_free(mem_layout->free_base, mem_layout->free_size,
image_base, image_size)) {
WARN("Failed to reserve region [base = %p, size = 0x%zx]\n",
(void *) image_base, image_size);
dump_load_info(image_base, image_size, mem_layout);
io_result = -ENOMEM;
goto exit;
}
/* We have enough space so load the image now */
/* TODO: Consider whether to try to recover/retry a partially successful read */
io_result = io_read(image_handle, image_base, image_size, &bytes_read);
if ((io_result != 0) || (bytes_read < image_size)) {
WARN("Failed to load image id=%u (%i)\n", image_id, io_result);
goto exit;
}
image_data->image_base = image_base;
image_data->image_size = image_size;
/*
* Update the memory usage info.
* This is done after the actual loading so that it is not updated when
* the load is unsuccessful.
* If the caller does not provide an entry point, bypass the memory
* reservation.
*/
if (entry_point_info != NULL) {
reserve_mem(&mem_layout->free_base, &mem_layout->free_size,
image_base, image_size);
entry_point_info->pc = image_base;
} else {
INFO("Skip reserving region [base = %p, size = 0x%zx]\n",
(void *) image_base, image_size);
}
#if !TRUSTED_BOARD_BOOT
/*
* File has been successfully loaded.
* Flush the image to main memory so that it can be executed later by
* any CPU, regardless of cache and MMU state.
* When TBB is enabled the image is flushed later, after image
* authentication.
*/
flush_dcache_range(image_base, image_size);
#endif /* TRUSTED_BOARD_BOOT */
INFO("Image id=%u loaded at address %p, size = 0x%zx\n", image_id,
(void *) image_base, image_size);
exit:
io_close(image_handle);
/* Ignore improbable/unrecoverable error in 'close' */
/* TODO: Consider maintaining open device connection from this bootloader stage */
io_dev_close(dev_handle);
/* Ignore improbable/unrecoverable error in 'dev_close' */
return io_result;
}
static int load_auth_image_internal(meminfo_t *mem_layout,
unsigned int image_id,
uintptr_t image_base,
image_info_t *image_data,
entry_point_info_t *entry_point_info,
int is_parent_image)
{
int rc;
#if TRUSTED_BOARD_BOOT
unsigned int parent_id;
/* Use recursion to authenticate parent images */
rc = auth_mod_get_parent_id(image_id, &parent_id);
if (rc == 0) {
rc = load_auth_image_internal(mem_layout, parent_id, image_base,
image_data, NULL, 1);
if (rc != 0) {
return rc;
}
}
#endif /* TRUSTED_BOARD_BOOT */
/* Load the image */
rc = load_image(mem_layout, image_id, image_base, image_data,
entry_point_info);
if (rc != 0) {
return rc;
}
#if TRUSTED_BOARD_BOOT
/* Authenticate it */
rc = auth_mod_verify_img(image_id,
(void *)image_data->image_base,
image_data->image_size);
if (rc != 0) {
/* Authentication error, zero memory and flush it right away. */
zero_normalmem((void *)image_data->image_base,
image_data->image_size);
flush_dcache_range(image_data->image_base,
image_data->image_size);
return -EAUTH;
}
/*
* File has been successfully loaded and authenticated.
* Flush the image to main memory so that it can be executed later by
* any CPU, regardless of cache and MMU state.
* Do it only for child images, not for the parents (certificates).
*/
if (!is_parent_image) {
flush_dcache_range(image_data->image_base,
image_data->image_size);
}
#endif /* TRUSTED_BOARD_BOOT */
return 0;
}
/*******************************************************************************
* Generic function to load and authenticate an image. The image is actually
* loaded by calling the 'load_image()' function. Therefore, it returns the
* same error codes if the loading operation failed, or -EAUTH if the
* authentication failed. In addition, this function uses recursion to
* authenticate the parent images up to the root of trust.
******************************************************************************/
int load_auth_image(meminfo_t *mem_layout,
unsigned int image_id,
uintptr_t image_base,
image_info_t *image_data,
entry_point_info_t *entry_point_info)
{
int err;
do {
err = load_auth_image_internal(mem_layout, image_id, image_base,
image_data, entry_point_info, 0);
} while (err != 0 && plat_try_next_boot_source());
return err;
}
#endif /* LOAD_IMAGE_V2 */
/*******************************************************************************
* Print the content of an entry_point_info_t structure.
******************************************************************************/
void print_entry_point_info(const entry_point_info_t *ep_info)
{
INFO("Entry point address = %p\n", (void *)ep_info->pc);
INFO("SPSR = 0x%x\n", ep_info->spsr);
#define PRINT_IMAGE_ARG(n) \
VERBOSE("Argument #" #n " = 0x%llx\n", \
(unsigned long long) ep_info->args.arg##n)
PRINT_IMAGE_ARG(0);
PRINT_IMAGE_ARG(1);
PRINT_IMAGE_ARG(2);
PRINT_IMAGE_ARG(3);
#ifndef AARCH32
PRINT_IMAGE_ARG(4);
PRINT_IMAGE_ARG(5);
PRINT_IMAGE_ARG(6);
PRINT_IMAGE_ARG(7);
#endif
#undef PRINT_IMAGE_ARG
}

View File

@@ -0,0 +1,261 @@
/*
* Copyright (c) 2016-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch_helpers.h>
#include <assert.h>
#include <bl_common.h>
#include <desc_image_load.h>
static bl_load_info_t bl_load_info;
static bl_params_t next_bl_params;
/*******************************************************************************
* This function flushes the data structures so that they are visible
* in memory for the next BL image.
******************************************************************************/
void flush_bl_params_desc(void)
{
flush_dcache_range((uintptr_t)bl_mem_params_desc_ptr,
sizeof(*bl_mem_params_desc_ptr) * bl_mem_params_desc_num);
flush_dcache_range((uintptr_t)&next_bl_params,
sizeof(next_bl_params));
}
/*******************************************************************************
* This function returns the index for the given image_id within the
* image descriptor array provided by bl_mem_params_desc_ptr, if the
* image is found, else it returns -1.
******************************************************************************/
int get_bl_params_node_index(unsigned int image_id)
{
int index;
assert(image_id != INVALID_IMAGE_ID);
for (index = 0; index < bl_mem_params_desc_num; index++) {
if (bl_mem_params_desc_ptr[index].image_id == image_id)
return index;
}
return -1;
}
/*******************************************************************************
* This function returns the pointer to `bl_mem_params_node_t` object for
* given image_id, within the image descriptor array provided by
* bl_mem_params_desc_ptr, if the image is found, else it returns NULL.
******************************************************************************/
bl_mem_params_node_t *get_bl_mem_params_node(unsigned int image_id)
{
int index;
assert(image_id != INVALID_IMAGE_ID);
index = get_bl_params_node_index(image_id);
if (index >= 0)
return &bl_mem_params_desc_ptr[index];
else
return NULL;
}
/*******************************************************************************
* This function creates the list of loadable images, by populating and
* linking each `bl_load_info_node_t` type node, using the internal array
* of image descriptors provided by bl_mem_params_desc_ptr. It also populates
* and returns `bl_load_info_t` type structure that contains head of the list
* of loadable images.
******************************************************************************/
bl_load_info_t *get_bl_load_info_from_mem_params_desc(void)
{
int index = 0;
/* If there is no image to start with, return NULL */
if (!bl_mem_params_desc_num)
return NULL;
/* Assign initial data structures */
bl_load_info_node_t *bl_node_info =
&bl_mem_params_desc_ptr[index].load_node_mem;
bl_load_info.head = bl_node_info;
SET_PARAM_HEAD(&bl_load_info, PARAM_BL_LOAD_INFO, VERSION_2, 0);
/* Go through the image descriptor array and create the list */
for (; index < bl_mem_params_desc_num; index++) {
/* Populate the image information */
bl_node_info->image_id = bl_mem_params_desc_ptr[index].image_id;
bl_node_info->image_info = &bl_mem_params_desc_ptr[index].image_info;
/* Link next image if present */
if ((index + 1) < bl_mem_params_desc_num) {
/* Get the memory and link the next node */
bl_node_info->next_load_info =
&bl_mem_params_desc_ptr[index + 1].load_node_mem;
bl_node_info = bl_node_info->next_load_info;
}
}
return &bl_load_info;
}
/*******************************************************************************
* This function creates the list of executable images, by populating and
* linking each `bl_params_node_t` type node, using the internal array of
* image descriptors provided by bl_mem_params_desc_ptr. It also populates
* and returns `bl_params_t` type structure that contains head of the list
* of executable images.
******************************************************************************/
bl_params_t *get_next_bl_params_from_mem_params_desc(void)
{
int count;
unsigned int img_id = 0;
int link_index = 0;
bl_params_node_t *bl_current_exec_node = NULL;
bl_params_node_t *bl_last_exec_node = NULL;
bl_mem_params_node_t *desc_ptr;
/* If there is no image to start with, return NULL */
if (!bl_mem_params_desc_num)
return NULL;
/* Get the list HEAD */
for (count = 0; count < bl_mem_params_desc_num; count++) {
desc_ptr = &bl_mem_params_desc_ptr[count];
if ((EP_GET_EXE(desc_ptr->ep_info.h.attr) == EXECUTABLE) &&
(EP_GET_FIRST_EXE(desc_ptr->ep_info.h.attr) == EP_FIRST_EXE)) {
next_bl_params.head = &desc_ptr->params_node_mem;
link_index = count;
break;
}
}
/* Make sure we have a HEAD node */
assert(next_bl_params.head != NULL);
/* Populate the HEAD information */
SET_PARAM_HEAD(&next_bl_params, PARAM_BL_PARAMS, VERSION_2, 0);
/*
* Go through the image descriptor array and create the list.
* This bounded loop is to make sure that we are not looping forever.
*/
for (count = 0 ; count < bl_mem_params_desc_num; count++) {
desc_ptr = &bl_mem_params_desc_ptr[link_index];
/* Make sure the image is executable */
assert(EP_GET_EXE(desc_ptr->ep_info.h.attr) == EXECUTABLE);
/* Get the memory for current node */
bl_current_exec_node = &desc_ptr->params_node_mem;
/* Populate the image information */
bl_current_exec_node->image_id = desc_ptr->image_id;
bl_current_exec_node->image_info = &desc_ptr->image_info;
bl_current_exec_node->ep_info = &desc_ptr->ep_info;
if (bl_last_exec_node) {
/* Assert if loop detected */
assert(bl_last_exec_node->next_params_info == NULL);
/* Link the previous node to the current one */
bl_last_exec_node->next_params_info = bl_current_exec_node;
}
/* Update the last node */
bl_last_exec_node = bl_current_exec_node;
/* If no next hand-off image then break out */
img_id = desc_ptr->next_handoff_image_id;
if (img_id == INVALID_IMAGE_ID)
break;
/* Get the index for the next hand-off image */
link_index = get_bl_params_node_index(img_id);
assert((link_index > 0) &&
(link_index < bl_mem_params_desc_num));
}
/* Invalid image is expected to terminate the loop */
assert(img_id == INVALID_IMAGE_ID);
return &next_bl_params;
}
/*******************************************************************************
* This function populates the entry point information with the corresponding
* config file for all executable BL images described in bl_params.
******************************************************************************/
void populate_next_bl_params_config(bl_params_t *bl2_to_next_bl_params)
{
bl_params_node_t *params_node;
unsigned int fw_config_id;
uintptr_t hw_config_base = 0, fw_config_base;
bl_mem_params_node_t *mem_params;
assert(bl2_to_next_bl_params != NULL);
/*
* Get the `bl_mem_params_node_t` corresponding to HW_CONFIG
* if available.
*/
mem_params = get_bl_mem_params_node(HW_CONFIG_ID);
if (mem_params != NULL)
hw_config_base = mem_params->image_info.image_base;
for (params_node = bl2_to_next_bl_params->head; params_node != NULL;
params_node = params_node->next_params_info) {
fw_config_base = 0;
switch (params_node->image_id) {
case BL31_IMAGE_ID:
fw_config_id = SOC_FW_CONFIG_ID;
break;
case BL32_IMAGE_ID:
fw_config_id = TOS_FW_CONFIG_ID;
break;
case BL33_IMAGE_ID:
fw_config_id = NT_FW_CONFIG_ID;
break;
default:
fw_config_id = INVALID_IMAGE_ID;
break;
}
if (fw_config_id != INVALID_IMAGE_ID) {
mem_params = get_bl_mem_params_node(fw_config_id);
if (mem_params != NULL)
fw_config_base = mem_params->image_info.image_base;
}
/*
* Pass hw and tb_fw config addresses to next images. NOTE - for
* EL3 runtime images (BL31 for AArch64 and BL32 for AArch32),
* arg0 is already used by generic code. Take care not to
* overwrite the previous initialisations.
*/
if (params_node == bl2_to_next_bl_params->head) {
if (params_node->ep_info->args.arg1 == 0)
params_node->ep_info->args.arg1 =
fw_config_base;
if (params_node->ep_info->args.arg2 == 0)
params_node->ep_info->args.arg2 =
hw_config_base;
} else {
if (params_node->ep_info->args.arg0 == 0)
params_node->ep_info->args.arg0 =
fw_config_base;
if (params_node->ep_info->args.arg1 == 0)
params_node->ep_info->args.arg1 =
hw_config_base;
}
}
}

View File

@@ -0,0 +1,96 @@
/*
* Copyright (c) 2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
/* Helper functions to offer easier navigation of Device Tree Blob */
#include <assert.h>
#include <debug.h>
#include <fdt_wrappers.h>
#include <libfdt.h>
/*
* Read cells from a given property of the given node. At most 2 cells of the
* property are read, and the pointer is updated. Returns 0 on success, and -1
* error
*/
int fdtw_read_cells(const void *dtb, int node, const char *prop,
unsigned int cells, void *value)
{
const uint32_t *value_ptr;
uint32_t hi = 0, lo;
int value_len;
assert(dtb != NULL);
assert(prop != NULL);
assert(value != NULL);
assert(node >= 0);
/* We expect either a 1 or 2 cell property */
assert(cells <= 2U);
/* Access property and obtain its length (in bytes) */
value_ptr = fdt_getprop_namelen(dtb, node, prop, (int)strlen(prop),
&value_len);
if (value_ptr == NULL) {
WARN("Couldn't find property %s in dtb\n", prop);
return -1;
}
/* Verify that property length accords with cell length */
if (NCELLS((unsigned int)value_len) != cells) {
WARN("Property length mismatch\n");
return -1;
}
if (cells == 2U) {
hi = fdt32_to_cpu(*value_ptr);
value_ptr++;
}
lo = fdt32_to_cpu(*value_ptr);
if (cells == 2U)
*((uint64_t *) value) = ((uint64_t) hi << 32) | lo;
else
*((uint32_t *) value) = lo;
return 0;
}
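/*
 * Hypothetical usage sketch (the property name and caller context are made up
 * for illustration): reading a 2-cell (64-bit) value from a device tree node.
 */
static int example_read_base(const void *dtb, int node)
{
	uint64_t base;

	if (fdtw_read_cells(dtb, node, "reg", 2U, &base) != 0)
		return -1;
	INFO("base address: 0x%llx\n", (unsigned long long)base);
	return 0;
}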
/*
* Write cells in place to a given property of the given node. At most 2 cells
* of the property are written. Returns 0 on success, and -1 upon error.
*/
int fdtw_write_inplace_cells(void *dtb, int node, const char *prop,
unsigned int cells, void *value)
{
int err, len;
assert(dtb != NULL);
assert(prop != NULL);
assert(value != NULL);
assert(node >= 0);
/* We expect either a 1 or 2 cell property */
assert(cells <= 2U);
if (cells == 2U)
*(uint64_t *)value = cpu_to_fdt64(*(uint64_t *)value);
else
*(uint32_t *)value = cpu_to_fdt32(*(uint32_t *)value);
len = (int)cells * 4;
/* Set property value in place */
err = fdt_setprop_inplace(dtb, node, prop, value, len);
if (err != 0) {
WARN("Modify property %s failed with error %d\n", prop, err);
return -1;
}
return 0;
}

View File

@@ -0,0 +1,79 @@
/*
* Copyright (c) 2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch_helpers.h>
#include <assert.h>
#include <bl_common.h>
#include <debug.h>
#include <image_decompress.h>
#include <stdint.h>
static uintptr_t decompressor_buf_base;
static uint32_t decompressor_buf_size;
static decompressor_t *decompressor;
static struct image_info saved_image_info;
void image_decompress_init(uintptr_t buf_base, uint32_t buf_size,
decompressor_t *_decompressor)
{
decompressor_buf_base = buf_base;
decompressor_buf_size = buf_size;
decompressor = _decompressor;
}
void image_decompress_prepare(struct image_info *info)
{
/*
* If the image is compressed, it should be loaded into the temporary
* buffer instead of its final destination. We save image_info, then
* override ->image_base and ->image_max_size so that load_image() will
* transfer the compressed data to the temporary buffer.
*/
saved_image_info = *info;
info->image_base = decompressor_buf_base;
info->image_max_size = decompressor_buf_size;
}
int image_decompress(struct image_info *info)
{
uintptr_t compressed_image_base, image_base, work_base;
uint32_t compressed_image_size, work_size;
int ret;
/*
* The size of compressed data has been filled by load_image().
* Read it out before restoring image_info.
*/
compressed_image_size = info->image_size;
compressed_image_base = info->image_base;
*info = saved_image_info;
assert(compressed_image_size <= decompressor_buf_size);
image_base = info->image_base;
/*
* Use the rest of the temporary buffer as the workspace for the
* decompressor, since it may need additional memory.
*/
work_base = compressed_image_base + compressed_image_size;
work_size = decompressor_buf_size - compressed_image_size;
ret = decompressor(&compressed_image_base, compressed_image_size,
&image_base, info->image_max_size,
work_base, work_size);
if (ret) {
ERROR("Failed to decompress image (err=%d)\n", ret);
return ret;
}
/* image_base is updated to the final position when decompressor() exits. */
info->image_size = image_base - info->image_base;
flush_dcache_range(info->image_base, info->image_size);
return 0;
}
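/*
 * Hypothetical call sequence (decompress_lz4 and the buffer constants are
 * made-up names), reflecting the ordering the functions above expect:
 *
 *   image_decompress_init(DECOMP_BUF_BASE, DECOMP_BUF_SIZE, decompress_lz4);
 *   image_decompress_prepare(&image_info);  // redirect the load to the buffer
 *   ...load the compressed image (fills image_info.image_size)...
 *   image_decompress(&image_info);          // inflate to the real destination
 */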

View File

@@ -0,0 +1,167 @@
/*
* Copyright (c) 2013-2018, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <assert.h>
#include <debug.h>
#include <errno.h>
#include <runtime_svc.h>
#include <string.h>
/*******************************************************************************
* The 'rt_svc_descs' array holds the runtime service descriptors exported by
* services by placing them in the 'rt_svc_descs' linker section.
* The 'rt_svc_descs_indices' array holds the index of a descriptor in the
* 'rt_svc_descs' array. When an SMC arrives, the OEN[29:24] bits and the call
* type[31] bit in the function id are combined to get an index into the
* 'rt_svc_descs_indices' array. This gives the index of the descriptor in the
* 'rt_svc_descs' array which contains the SMC handler.
******************************************************************************/
uint8_t rt_svc_descs_indices[MAX_RT_SVCS];
static rt_svc_desc_t *rt_svc_descs;
#define RT_SVC_DECS_NUM ((RT_SVC_DESCS_END - RT_SVC_DESCS_START)\
/ sizeof(rt_svc_desc_t))
/*******************************************************************************
* Function to invoke the registered `handle` corresponding to the smc_fid in
* AArch32 mode.
******************************************************************************/
#if SMCCC_MAJOR_VERSION == 1
uintptr_t handle_runtime_svc(uint32_t smc_fid,
void *cookie,
void *handle,
unsigned int flags)
{
u_register_t x1, x2, x3, x4;
int index;
unsigned int idx;
const rt_svc_desc_t *rt_svc_descs;
assert(handle);
idx = get_unique_oen_from_smc_fid(smc_fid);
assert(idx < MAX_RT_SVCS);
index = rt_svc_descs_indices[idx];
if (index < 0 || index >= (int)RT_SVC_DECS_NUM)
SMC_RET1(handle, SMC_UNK);
rt_svc_descs = (rt_svc_desc_t *) RT_SVC_DESCS_START;
get_smc_params_from_ctx(handle, x1, x2, x3, x4);
return rt_svc_descs[index].handle(smc_fid, x1, x2, x3, x4, cookie,
handle, flags);
}
#endif /* SMCCC_MAJOR_VERSION */
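/*
 * Index sketch (assumption: get_unique_oen_from_smc_fid() combines the OEN in
 * bits [29:24] of the function ID with the call-type bit [31], as described
 * in the comment block above): a fast (type 1) SMC with OEN 4 would map to
 * entry (1 << 6) | 4 = 68 of rt_svc_descs_indices[].
 */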
/*******************************************************************************
* Simple routine to sanity check a runtime service descriptor before using it
******************************************************************************/
static int32_t validate_rt_svc_desc(const rt_svc_desc_t *desc)
{
if (desc == NULL)
return -EINVAL;
if (desc->start_oen > desc->end_oen)
return -EINVAL;
if (desc->end_oen >= OEN_LIMIT)
return -EINVAL;
#if SMCCC_MAJOR_VERSION == 1
if ((desc->call_type != SMC_TYPE_FAST) &&
(desc->call_type != SMC_TYPE_YIELD))
return -EINVAL;
#elif SMCCC_MAJOR_VERSION == 2
if (desc->is_vendor > 1U)
return -EINVAL;
#endif /* SMCCC_MAJOR_VERSION */
/* A runtime service having no init or handle function doesn't make sense */
if ((desc->init == NULL) && (desc->handle == NULL))
return -EINVAL;
return 0;
}
/*******************************************************************************
* This function calls the initialisation routine in the descriptor exported by
* a runtime service. Once a descriptor has been validated, its start & end
* owning entity numbers and the call type are combined to form a unique oen.
* The unique oen is used as an index into the 'rt_svc_descs_indices' array.
* The index of the runtime service descriptor is stored at this index.
******************************************************************************/
void runtime_svc_init(void)
{
int rc = 0;
unsigned int index, start_idx, end_idx;
/* Assert that the number of descriptors detected is less than the maximum number of indices */
assert((RT_SVC_DESCS_END >= RT_SVC_DESCS_START) &&
(RT_SVC_DECS_NUM < MAX_RT_SVCS));
/* If no runtime services are implemented then simply bail out */
if (RT_SVC_DECS_NUM == 0U)
return;
/* Initialise internal variables to invalid state */
memset(rt_svc_descs_indices, -1, sizeof(rt_svc_descs_indices));
rt_svc_descs = (rt_svc_desc_t *) RT_SVC_DESCS_START;
for (index = 0; index < RT_SVC_DECS_NUM; index++) {
rt_svc_desc_t *service = &rt_svc_descs[index];
/*
* An invalid descriptor is an error condition since it is
* difficult to predict the system behaviour in the absence
* of this service.
*/
rc = validate_rt_svc_desc(service);
if (rc) {
ERROR("Invalid runtime service descriptor %p\n",
(void *) service);
panic();
}
/*
* The runtime service may have separate rt_svc_desc_t
* for its fast smc and yielding smc. Since the service itself
* needs to be initialized only once, only one of them will have
* an initialisation routine defined. Call the initialisation
* routine for this runtime service, if it is defined.
*/
if (service->init) {
rc = service->init();
if (rc) {
ERROR("Error initializing runtime service %s\n",
service->name);
continue;
}
}
/*
* Fill the indices corresponding to the start and end
* owning entity numbers with the index of the
* descriptor which will handle the SMCs for this owning
* entity range.
*/
#if SMCCC_MAJOR_VERSION == 1
start_idx = get_unique_oen(service->start_oen,
service->call_type);
end_idx = get_unique_oen(service->end_oen,
service->call_type);
#elif SMCCC_MAJOR_VERSION == 2
start_idx = get_rt_desc_idx(service->start_oen,
service->is_vendor);
end_idx = get_rt_desc_idx(service->end_oen,
service->is_vendor);
#endif
assert(start_idx <= end_idx);
assert(end_idx < MAX_RT_SVCS);
for (; start_idx <= end_idx; start_idx++)
rt_svc_descs_indices[start_idx] = index;
}
}

View File

@@ -0,0 +1,61 @@
/*
* Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <assert.h>
#include <debug.h>
#include <platform.h>
/* Set the default maximum log level to the `LOG_LEVEL` build flag */
static unsigned int max_log_level = LOG_LEVEL;
/*
* The common log function which is invoked by ARM Trusted Firmware code.
* This function should not be directly invoked and is meant to be
* only used by the log macros defined in debug.h. The function
* expects the first character in the format string to be one of the
* LOG_MARKER_* macros defined in debug.h.
*/
void tf_log(const char *fmt, ...)
{
unsigned int log_level;
va_list args;
const char *prefix_str;
/* We expect the LOG_MARKER_* macro as the first character */
log_level = fmt[0];
/* Verify that log_level is one of the LOG_MARKER_* macros defined in debug.h */
assert(log_level && log_level <= LOG_LEVEL_VERBOSE);
assert(log_level % 10 == 0);
if (log_level > max_log_level)
return;
prefix_str = plat_log_get_prefix(log_level);
if (prefix_str != NULL)
tf_string_print(prefix_str);
va_start(args, fmt);
tf_vprintf(fmt+1, args);
va_end(args);
}
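/*
 * Illustrative call sketch (not part of the original file): the log macros in
 * debug.h prepend a LOG_MARKER_* string to the format, roughly equivalent to:
 *
 *	tf_log(LOG_MARKER_ERROR "unexpected value %d\n", val);
 *
 * where LOG_MARKER_ERROR is a one-character string literal encoding the log
 * level.
 */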
/*
* Helper function that allows the platform to set the log level dynamically.
* The maximum log level is determined by the `LOG_LEVEL` build flag at compile
* time, and this helper can only set a log level lower than that compile-time
* maximum.
*/
void tf_log_set_max_level(unsigned int log_level)
{
assert(log_level <= LOG_LEVEL_VERBOSE);
assert((log_level % 10) == 0);
/* Cap log_level to the compile time maximum. */
if (log_level < LOG_LEVEL)
max_log_level = log_level;
}


@@ -0,0 +1,171 @@
/*
* Copyright (c) 2014-2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <arch_helpers.h>
#include <assert.h>
#include <debug.h>
#include <limits.h>
#include <stdarg.h>
#include <stdint.h>
/***********************************************************
* The tf_printf implementation for all BL stages
***********************************************************/
#define get_num_va_args(_args, _lcount) \
(((_lcount) > 1) ? va_arg(_args, long long int) : \
((_lcount) ? va_arg(_args, long int) : va_arg(_args, int)))
#define get_unum_va_args(_args, _lcount) \
(((_lcount) > 1) ? va_arg(_args, unsigned long long int) : \
((_lcount) ? va_arg(_args, unsigned long int) : va_arg(_args, unsigned int)))
void tf_string_print(const char *str)
{
assert(str);
while (*str)
putchar(*str++);
}
static void unsigned_num_print(unsigned long long int unum, unsigned int radix,
char padc, int padn)
{
/* Just need enough space to store a 64-bit decimal integer */
unsigned char num_buf[20];
int i = 0, rem;
do {
rem = unum % radix;
if (rem < 0xa)
num_buf[i++] = '0' + rem;
else
num_buf[i++] = 'a' + (rem - 0xa);
} while (unum /= radix);
if (padn > 0) {
while (i < padn--) {
putchar(padc);
}
}
while (--i >= 0)
putchar(num_buf[i]);
}
/*******************************************************************
* Reduced format print for Trusted Firmware.
* The following type specifiers are supported by this print
* %x - hexadecimal format
* %s - string format
* %d or %i - signed decimal format
* %u - unsigned decimal format
* %p - pointer format
*
* The following length specifiers are supported by this print
* %l - long int (64-bit on AArch64)
* %ll - long long int (64-bit on AArch64)
* %z - size_t sized integer formats (64 bit on AArch64)
*
* The following padding specifiers are supported by this print
* %0NN - Left-pad the number with 0s (NN is a decimal number)
*
* The print exits on any format specifier other than a valid
* combination of the above specifiers.
*******************************************************************/
void tf_vprintf(const char *fmt, va_list args)
{
int l_count;
long long int num;
unsigned long long int unum;
char *str;
char padc = 0; /* Padding character */
int padn; /* Number of characters to pad */
while (*fmt) {
l_count = 0;
padn = 0;
if (*fmt == '%') {
fmt++;
/* Check the format specifier */
loop:
switch (*fmt) {
case 'i': /* Fall through to next one */
case 'd':
num = get_num_va_args(args, l_count);
if (num < 0) {
putchar('-');
unum = (unsigned long long int)-num;
padn--;
} else
unum = (unsigned long long int)num;
unsigned_num_print(unum, 10, padc, padn);
break;
case 's':
str = va_arg(args, char *);
tf_string_print(str);
break;
case 'p':
unum = (uintptr_t)va_arg(args, void *);
if (unum) {
tf_string_print("0x");
padn -= 2;
}
unsigned_num_print(unum, 16, padc, padn);
break;
case 'x':
unum = get_unum_va_args(args, l_count);
unsigned_num_print(unum, 16, padc, padn);
break;
case 'z':
if (sizeof(size_t) == 8)
l_count = 2;
fmt++;
goto loop;
case 'l':
l_count++;
fmt++;
goto loop;
case 'u':
unum = get_unum_va_args(args, l_count);
unsigned_num_print(unum, 10, padc, padn);
break;
case '0':
padc = '0';
padn = 0;
fmt++;
while (1) {
char ch = *fmt;
if (ch < '0' || ch > '9') {
goto loop;
}
padn = (padn * 10) + (ch - '0');
fmt++;
}
default:
/* Exit on any other format specifier */
return;
}
fmt++;
continue;
}
putchar(*fmt++);
}
}
void tf_printf(const char *fmt, ...)
{
va_list va;
va_start(va, fmt);
tf_vprintf(fmt, va);
va_end(va);
}
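/*
 * Usage sketch (illustrative, not part of the original file), exercising the
 * specifiers documented above:
 *
 *	tf_printf("image %s at %p, size %llu, id 0x%08x\n",
 *		  name, (void *)base, (unsigned long long)size, id);
 */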


@@ -0,0 +1,108 @@
/*
* Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <debug.h>
#include <platform.h>
#include <stdarg.h>
static void unsigned_dec_print(char **s, size_t n, size_t *chars_printed,
unsigned int unum)
{
/* Enough for a 32-bit unsigned decimal integer (4294967295). */
unsigned char num_buf[10];
int i = 0, rem;
do {
rem = unum % 10;
num_buf[i++] = '0' + rem;
} while (unum /= 10);
while (--i >= 0) {
if (*chars_printed < n)
*(*s)++ = num_buf[i];
(*chars_printed)++;
}
}
/*******************************************************************
* Reduced snprintf to be used for Trusted Firmware.
* The following type specifiers are supported:
*
* %d or %i - signed decimal format
* %u - unsigned decimal format
*
* The function panics on all other format specifiers.
*
* It returns the number of characters that would be written if the
* buffer was big enough. If it returns a value lower than n, the
* whole string has been written.
*******************************************************************/
int tf_snprintf(char *s, size_t n, const char *fmt, ...)
{
va_list args;
int num;
unsigned int unum;
size_t chars_printed = 0;
if (n == 1) {
/* Buffer is too small to actually write anything else. */
*s = '\0';
n = 0;
} else if (n >= 2) {
/* Reserve space for the terminator character. */
n--;
}
va_start(args, fmt);
while (*fmt) {
if (*fmt == '%') {
fmt++;
/* Check the format specifier. */
switch (*fmt) {
case 'i':
case 'd':
num = va_arg(args, int);
if (num < 0) {
if (chars_printed < n)
*s++ = '-';
chars_printed++;
unum = (unsigned int)-num;
} else {
unum = (unsigned int)num;
}
unsigned_dec_print(&s, n, &chars_printed, unum);
break;
case 'u':
unum = va_arg(args, unsigned int);
unsigned_dec_print(&s, n, &chars_printed, unum);
break;
default:
/* Panic on any other format specifier. */
ERROR("tf_snprintf: specifier with ASCII code '%d' not supported.",
*fmt);
plat_panic_handler();
}
fmt++;
continue;
}
if (chars_printed < n)
*s++ = *fmt;
fmt++;
chars_printed++;
}
va_end(args);
if (n > 0)
*s = '\0';
return chars_printed;
}
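/*
 * Usage sketch (illustrative, not part of the original file): given the
 * return value contract above, truncation can be detected like this:
 *
 *	char buf[16];
 *	int len = tf_snprintf(buf, sizeof(buf), "cpu %u", cpu_id);
 *	if (len >= (int)sizeof(buf)) {
 *		// output was truncated
 *	}
 */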


@@ -0,0 +1,129 @@
Contributing to Trusted Firmware-A
==================================
Getting Started
---------------
- Make sure you have a `GitHub account`_.
- Create an `issue`_ for your work if one does not already exist. This gives
everyone visibility of whether others are working on something similar. Arm
licensees may contact Arm directly via their partner managers instead if
they prefer.
- Note that the `issue`_ tracker for this project is in a separate
`issue tracking repository`_. Please follow the guidelines in that
repository.
- If you intend to include Third Party IP in your contribution, please
raise a separate `issue`_ for this and ensure that the changes that
include Third Party IP are made on a separate topic branch.
- `Fork`_ `arm-trusted-firmware`_ on GitHub.
- Clone the fork to your own machine.
- Create a local topic branch based on the `arm-trusted-firmware`_ ``master``
branch.
Making Changes
--------------
- Make commits of logical units. See these general `Git guidelines`_ for
contributing to a project.
- Follow the `Linux coding style`_; this style is enforced for the TF-A
project (style errors only, not warnings).
- Use the checkpatch.pl script provided with the Linux source tree. A
Makefile target is provided for convenience (see section 2 in the
`User Guide`_).
- Keep the commits on topic. If you need to fix another bug or make another
enhancement, please create a separate `issue`_ and address it on a separate
topic branch.
- Avoid long commit series. If you do have a long series, consider whether
some commits should be squashed together or addressed in a separate topic.
- Make sure your commit messages are in the proper format. If a commit fixes
a GitHub `issue`_, include a reference (e.g.
"fixes arm-software/tf-issues#45"); this ensures the `issue`_ is
`automatically closed`_ when merged into the `arm-trusted-firmware`_ ``master``
branch.
- Where appropriate, please update the documentation.
- Consider whether the `User Guide`_, `Porting Guide`_, `Firmware Design`_ or
other in-source documentation needs updating.
- Ensure that each changed file has the correct copyright and license
information. Files that entirely consist of contributions to this
project should have the copyright notice and BSD-3-Clause SPDX license
identifier as shown in `license.rst`_. Files that contain
changes to imported Third Party IP should contain a notice as follows,
with the original copyright and license text retained:
::
Portions copyright (c) [XXXX-]YYYY, Arm Limited and Contributors. All rights reserved.
where XXXX is the year of first contribution (if different to YYYY) and
YYYY is the year of most recent contribution.
- If not done previously, you may add your name or your company name to
the `Acknowledgements`_ file.
- If you are submitting new files that you intend to be the technical
sub-maintainer for (for example, a new platform port), then also update
the `Maintainers`_ file.
- For topics with multiple commits, you should make all documentation
changes (and nothing else) in the last commit of the series. Otherwise,
include the documentation changes within the single commit.
- Please test your changes. As a minimum, ensure UEFI boots to the shell on
the Foundation FVP. See `Running the software on FVP`_ for more information.
Submitting Changes
------------------
- Ensure that each commit in the series has at least one ``Signed-off-by:``
line, using your real name and email address. The names in the
``Signed-off-by:`` and ``Author:`` lines must match. If anyone else contributes
to the commit, they must also add their own ``Signed-off-by:`` line.
By adding this line the contributor certifies the contribution is made under
the terms of the `Developer Certificate of Origin (DCO)`_.
- Push your local changes to your fork of the repository.
- Submit a `pull request`_ to the `arm-trusted-firmware`_ ``integration`` branch.
- The changes in the `pull request`_ will then undergo further review and
testing by the `Maintainers`_. Any review comments will be made as
comments on the `pull request`_. This may require you to do some rework.
- When the changes are accepted, the `Maintainers`_ will integrate them.
- Typically, the `Maintainers`_ will merge the `pull request`_ into the
``integration`` branch within the GitHub UI, creating a merge commit.
- Please avoid creating merge commits in the `pull request`_ itself.
- If the `pull request`_ is not based on a recent commit, the `Maintainers`_
may rebase it onto the ``master`` branch first, or ask you to do this.
- If the `pull request`_ cannot be automatically merged, the `Maintainers`_
will ask you to rebase it onto the ``master`` branch.
- After final integration testing, the `Maintainers`_ will push your merge
commit to the ``master`` branch. If a problem is found during integration,
the merge commit will be removed from the ``integration`` branch and the
`Maintainers`_ will ask you to create a new pull request to resolve the
problem.
- Please do not delete your topic branch until it is safely merged into
the ``master`` branch.
--------------
*Copyright (c) 2013-2018, Arm Limited and Contributors. All rights reserved.*
.. _GitHub account: https://github.com/signup/free
.. _issue: https://github.com/ARM-software/tf-issues/issues
.. _issue tracking repository: https://github.com/ARM-software/tf-issues
.. _Fork: https://help.github.com/articles/fork-a-repo
.. _arm-trusted-firmware: https://github.com/ARM-software/arm-trusted-firmware
.. _Git guidelines: http://git-scm.com/book/ch5-2.html
.. _Linux coding style: https://www.kernel.org/doc/Documentation/CodingStyle
.. _User Guide: ./docs/user-guide.rst
.. _automatically closed: https://help.github.com/articles/closing-issues-via-commit-messages
.. _Porting Guide: ./docs/porting-guide.rst
.. _Firmware Design: ./docs/firmware-design.rst
.. _license.rst: ./license.rst
.. _Acknowledgements: ./acknowledgements.rst
.. _Maintainers: ./maintainers.rst
.. _Running the software on FVP: ./docs/user-guide.rst#user-content-running-the-software-on-fvp
.. _Developer Certificate of Origin (DCO): ./dco.txt
.. _pull request: https://help.github.com/articles/using-pull-requests


@@ -0,0 +1,37 @@
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.


@@ -0,0 +1,96 @@
Arm SiP Service
===============
This document enumerates and describes the Arm SiP (Silicon Provider) services.
SiP services are non-standard, platform-specific services offered by the silicon
implementer or platform provider. They are accessed via the ``SMC`` instruction
("SMC calls") executed from Exception Levels below EL3. SMC calls for SiP
services:
- Follow `SMC Calling Convention`_;
- Use SMC function IDs that fall in the SiP range, which are ``0xc2000000`` -
``0xc200ffff`` for 64-bit calls, and ``0x82000000`` - ``0x8200ffff`` for 32-bit
calls.
The Arm SiP implementation offers the following services:
- Performance Measurement Framework (PMF)
- Execution State Switching service
Source definitions for Arm SiP service are located in the ``arm_sip_svc.h`` header
file.
Performance Measurement Framework (PMF)
---------------------------------------
The `Performance Measurement Framework`_
allows callers to retrieve timestamps captured at various points during TF-A
execution. It is described in detail in the `Firmware Design document`_.
Execution State Switching service
---------------------------------
Execution State Switching service provides a mechanism for a non-secure lower
Exception Level (either EL2, or NS EL1 if EL2 isn't implemented) to request to
switch its execution state (a.k.a. Register Width), either from AArch64 to
AArch32, or from AArch32 to AArch64, for the calling CPU. This service is only
available when Trusted Firmware-A (TF-A) is built for AArch64 (i.e. when build
option ``ARCH`` is set to ``aarch64``).
``ARM_SIP_SVC_EXE_STATE_SWITCH``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
Arguments:
uint32_t Function ID
uint32_t PC hi
uint32_t PC lo
uint32_t Cookie hi
uint32_t Cookie lo
Return:
uint32_t
The function ID parameter must be ``0x82000020``. It uniquely identifies the
Execution State Switching service being requested.
The parameters *PC hi* and *PC lo* define the upper and lower words, respectively,
of the entry point (physical address) at which execution should start, after
Execution State has been switched. When calling from AArch64, *PC hi* must be 0.
When execution starts at the supplied entry point after Execution State has been
switched, the parameters *Cookie hi* and *Cookie lo* are passed in CPU registers
0 and 1, respectively. When calling from AArch64, *Cookie hi* must be 0.
This call can only be made on the primary CPU, before any secondary CPUs have
been brought up with the ``CPU_ON`` PSCI call. Otherwise, the call will always fail.
The effect of switching execution state is as if the Exception Level were
entered for the first time, following power on. This means CPU registers that
have an architecturally defined reset value will assume that value. Other
registers should not be expected to retain the values they held before the
call was made.
CPU endianness, however, is preserved from the previous execution state. Note
that this switches the execution state of the calling CPU only. This is not a
substitute for PSCI ``SYSTEM_RESET``.
The service may return the following error codes:
- ``STATE_SW_E_PARAM``: If any of the parameters were deemed invalid for
a specific request.
- ``STATE_SW_E_DENIED``: If the call is not successful, or when TF-A is
built for AArch32.
If the call is successful, the caller will not observe the SMC returning.
Instead, execution starts at the supplied entry point, with the CPU registers 0
and 1 populated with the supplied *Cookie hi* and *Cookie lo* values,
respectively.
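As a rough illustration only (this snippet is not part of TF-A; the helper name
is hypothetical, while the function ID and the argument registers follow the
`SMC Calling Convention`_), an AArch64 caller could issue the SMC as follows:

.. code:: c

    #include <stdint.h>

    #define ARM_SIP_SVC_EXE_STATE_SWITCH  0x82000020U

    /*
     * Hypothetical helper: on success the SMC does not return; on failure it
     * returns STATE_SW_E_PARAM or STATE_SW_E_DENIED in the first result
     * register.
     */
    static uint32_t exe_state_switch(uint32_t pc_hi, uint32_t pc_lo,
                                     uint32_t cookie_hi, uint32_t cookie_lo)
    {
        register uint64_t x0 __asm__("x0") = ARM_SIP_SVC_EXE_STATE_SWITCH;
        register uint64_t x1 __asm__("x1") = pc_hi;     /* Must be 0 on AArch64 */
        register uint64_t x2 __asm__("x2") = pc_lo;
        register uint64_t x3 __asm__("x3") = cookie_hi; /* Must be 0 on AArch64 */
        register uint64_t x4 __asm__("x4") = cookie_lo;

        __asm__ volatile("smc #0"
                         : "+r" (x0)
                         : "r" (x1), "r" (x2), "r" (x3), "r" (x4)
                         : "memory");

        return (uint32_t)x0;
    }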
--------------
*Copyright (c) 2017-2018, Arm Limited and Contributors. All rights reserved.*
.. _SMC Calling Convention: http://infocenter.arm.com/help/topic/com.arm.doc.den0028a/index.html
.. _Performance Measurement Framework: ./firmware-design.rst#user-content-performance-measurement-framework
.. _Firmware Design document: ./firmware-design.rst


@@ -0,0 +1,941 @@
Abstracting a Chain of Trust
============================
.. section-numbering::
:suffix: .
.. contents::
The aim of this document is to describe the authentication framework
implemented in Trusted Firmware-A (TF-A). This framework fulfills the
following requirements:
#. It should be possible for a platform port to specify the Chain of Trust in
terms of certificate hierarchy and the mechanisms used to verify a
particular image/certificate.
#. The framework should distinguish between:
- The mechanism used to encode and transport information, e.g. DER encoded
X.509v3 certificates to ferry Subject Public Keys, hashes and non-volatile
counters.
- The mechanism used to verify the transported information i.e. the
cryptographic libraries.
The framework has been designed following a modular approach illustrated in the
next diagram:
::
+---------------+---------------+------------+
| Trusted | Trusted | Trusted |
| Firmware | Firmware | Firmware |
| Generic | IO Framework | Platform |
| Code i.e. | (IO) | Port |
| BL1/BL2 (GEN) | | (PP) |
+---------------+---------------+------------+
^ ^ ^
| | |
v v v
+-----------+ +-----------+ +-----------+
| | | | | Image |
| Crypto | | Auth | | Parser |
| Module |<->| Module |<->| Module |
| (CM) | | (AM) | | (IPM) |
| | | | | |
+-----------+ +-----------+ +-----------+
^ ^
| |
v v
+----------------+ +-----------------+
| Cryptographic | | Image Parser |
| Libraries (CL) | | Libraries (IPL) |
+----------------+ +-----------------+
| |
| |
| |
v v
+-----------------+
| Misc. Libs e.g. |
| ASN.1 decoder |
| |
+-----------------+
DIAGRAM 1.
This document describes the inner details of the authentication framework and
the abstraction mechanisms available to specify a Chain of Trust.
Framework design
----------------
This section describes some aspects of the framework design and the rationale
behind them. These aspects are key to verifying a Chain of Trust.
Chain of Trust
~~~~~~~~~~~~~~
A CoT is basically a sequence of authentication images which usually starts with
a root of trust and culminates in a single data image. The following diagram
illustrates how this maps to a CoT for the BL31 image described in the
TBBR-Client specification.
::
+------------------+ +-------------------+
| ROTPK/ROTPK Hash |------>| Trusted Key |
+------------------+ | Certificate |
| (Auth Image) |
/+-------------------+
/ |
/ |
/ |
/ |
L v
+------------------+ +-------------------+
| Trusted World |------>| BL31 Key |
| Public Key | | Certificate |
+------------------+ | (Auth Image) |
+-------------------+
/ |
/ |
/ |
/ |
/ v
+------------------+ L +-------------------+
| BL31 Content |------>| BL31 Content |
| Certificate PK | | Certificate |
+------------------+ | (Auth Image) |
+-------------------+
/ |
/ |
/ |
/ |
/ v
+------------------+ L +-------------------+
| BL31 Hash |------>| BL31 Image |
| | | (Data Image) |
+------------------+ | |
+-------------------+
DIAGRAM 2.
The root of trust is usually a public key (ROTPK) that has been burnt in the
platform and cannot be modified.
Image types
~~~~~~~~~~~
Images in a CoT are categorised as authentication and data images. An
authentication image contains information to authenticate a data image or
another authentication image. A data image is usually a boot loader binary, but
it could be any other data that requires authentication.
Component responsibilities
~~~~~~~~~~~~~~~~~~~~~~~~~~
For every image in a Chain of Trust, the following high level operations are
performed to verify it:
#. Allocate memory for the image either statically or at runtime.
#. Identify the image and load it in the allocated memory.
#. Check the integrity of the image as per its type.
#. Authenticate the image as per the cryptographic algorithms used.
#. If the image is an authentication image, extract the information that will
be used to authenticate the next image in the CoT.
In Diagram 1, each component is responsible for one or more of these operations.
The responsibilities are briefly described below.
TF-A Generic code and IO framework (GEN/IO)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
These components are responsible for initiating the authentication process for a
particular image in BL1 or BL2. For each BL image that requires authentication,
the Generic code recursively asks the Authentication module for the parent
image until either an authenticated image or the ROT is reached. Then the
Generic code calls the IO framework to load the image and calls the
Authentication module to authenticate it, following the CoT from the ROT to
the image.
TF-A Platform Port (PP)
^^^^^^^^^^^^^^^^^^^^^^^
The platform is responsible for:
#. Specifying the CoT for each image that needs to be authenticated. Details of
how a CoT can be specified by the platform are explained later. The platform
also specifies the authentication methods and the parsing method used for
each image.
#. Statically allocating memory for each parameter in each image which is
used for verifying the CoT, e.g. memory for public keys, hashes etc.
#. Providing the ROTPK or a hash of it.
#. Providing additional information to the IPM to enable it to identify and
extract authentication parameters contained in an image, e.g. if the
parameters are stored as X509v3 extensions, the corresponding OID must be
provided.
#. Fulfilling any other memory requirements of the IPM and the CM (not currently
described in this document).
#. Exporting functions to verify an image which uses an authentication method that
cannot be interpreted by the CM, e.g. if an image has to be verified using a
NV counter, then the value of the counter to compare with can only be
provided by the platform.
#. Exporting a custom IPM if a proprietary image format is being used (described
later).
Authentication Module (AM)
^^^^^^^^^^^^^^^^^^^^^^^^^^
It is responsible for:
#. Providing the necessary abstraction mechanisms to describe a CoT. Amongst
other things, the authentication and image parsing methods must be specified
by the PP in the CoT.
#. Verifying the CoT passed by GEN by utilising functionality exported by the
PP, IPM and CM.
#. Tracking which images have been verified. In case an image is a part of
multiple CoTs then it should be verified only once e.g. the Trusted World
Key Certificate in the TBBR-Client spec. contains information to verify
SCP\_BL2, BL31 and BL32, each of which has a separate CoT. (This
responsibility has not been described in this document but should be
trivial to implement).
#. Reusing memory meant for a data image to verify authentication images e.g.
in the CoT described in Diagram 2, each certificate can be loaded and
verified in the memory reserved by the platform for the BL31 image. By the
time BL31 (the data image) is loaded, all information to authenticate it
will have been extracted from the parent image i.e. BL31 content
certificate. It is assumed that the size of an authentication image will
never exceed the size of a data image. It should be possible to verify this
at build time using asserts.
Cryptographic Module (CM)
^^^^^^^^^^^^^^^^^^^^^^^^^
The CM is responsible for providing an API to:
#. Verify a digital signature.
#. Verify a hash.
The CM does not include any cryptography related code, but it relies on an
external library to perform the cryptographic operations. A Crypto-Library (CL)
linking the CM and the external library must be implemented. The following
functions must be provided by the CL:
.. code:: c
void (*init)(void);
int (*verify_signature)(void *data_ptr, unsigned int data_len,
void *sig_ptr, unsigned int sig_len,
void *sig_alg, unsigned int sig_alg_len,
void *pk_ptr, unsigned int pk_len);
int (*verify_hash)(void *data_ptr, unsigned int data_len,
void *digest_info_ptr, unsigned int digest_info_len);
These functions are registered in the CM using the macro:
.. code:: c
REGISTER_CRYPTO_LIB(_name, _init, _verify_signature, _verify_hash);
``_name`` must be a string containing the name of the CL. This name is used for
debugging purposes.
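For example, a CL based on mbed TLS (described later in this document) could
register itself as follows. This is only a sketch, reusing the function names
from the prototypes above:

.. code:: c

    REGISTER_CRYPTO_LIB("mbed TLS", init, verify_signature, verify_hash);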
Image Parser Module (IPM)
^^^^^^^^^^^^^^^^^^^^^^^^^
The IPM is responsible for:
#. Checking the integrity of each image loaded by the IO framework.
#. Extracting parameters used for authenticating an image based upon a
description provided by the platform in the CoT descriptor.
Images may have different formats (for example, authentication images could be
x509v3 certificates, signed ELF files or any other platform specific format).
The IPM allows an Image Parser Library (IPL) to be registered for every image format
used in the CoT. This library must implement the specific methods to parse the
image. The IPM obtains the image format from the CoT and calls the right IPL to
check the image integrity and extract the authentication parameters.
See Section "Describing the image parsing methods" for more details about the
mechanism the IPM provides to define and register IPLs.
Authentication methods
~~~~~~~~~~~~~~~~~~~~~~
The AM supports the following authentication methods:
#. Hash
#. Digital signature
The platform may specify these methods in the CoT in case it decides to define
a custom CoT instead of reusing a predefined one.
If a data image uses multiple methods, then all the methods must be a part of
the same CoT. The number and type of parameters are method specific. These
parameters should be obtained from the parent image using the IPM.
#. Hash
Parameters:
#. A pointer to data to hash
#. Length of the data
#. A pointer to the hash
#. Length of the hash
The hash will be represented by the DER encoding of the following ASN.1
type:
::
DigestInfo ::= SEQUENCE {
digestAlgorithm DigestAlgorithmIdentifier,
digest Digest
}
This ASN.1 structure makes it possible to remove any assumption about the
type of hash algorithm used as this information accompanies the hash. This
should allow the Cryptography Library (CL) to support multiple hash
algorithm implementations.
#. Digital Signature
Parameters:
#. A pointer to data to sign
#. Length of the data
#. Public Key Algorithm
#. Public Key value
#. Digital Signature Algorithm
#. Digital Signature value
The Public Key parameters will be represented by the DER encoding of the
following ASN.1 type:
::
SubjectPublicKeyInfo ::= SEQUENCE {
algorithm AlgorithmIdentifier{PUBLIC-KEY,{PublicKeyAlgorithms}},
subjectPublicKey BIT STRING }
The Digital Signature Algorithm will be represented by the DER encoding of
the following ASN.1 types.
::
AlgorithmIdentifier {ALGORITHM:IOSet } ::= SEQUENCE {
algorithm ALGORITHM.&id({IOSet}),
parameters ALGORITHM.&Type({IOSet}{@algorithm}) OPTIONAL
}
The digital signature will be represented by:
::
signature ::= BIT STRING
The authentication framework will use the image descriptor to extract all the
information related to authentication.
Specifying a Chain of Trust
---------------------------
A CoT can be described as a set of image descriptors linked together in a
particular order. The order dictates the sequence in which they must be
verified. Each image has a set of properties which allow the AM to verify it.
These properties are described below.
The PP is responsible for defining a single or multiple CoTs for a data image.
Unless otherwise specified, the data structures described in the following
sections are populated by the PP statically.
Describing the image parsing methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The parsing method refers to the format of a particular image. For example, an
authentication image that represents a certificate could be in the X.509v3
format. A data image that represents a boot loader stage could be in raw binary
or ELF format. The IPM supports three parsing methods. An image has to use one
of the three methods described below. An IPL is responsible for interpreting a
single parsing method. There has to be one IPL for every method used by the
platform.
#. Raw format: This format is effectively a nop as an image using this method
is treated as being in raw binary format e.g. boot loader images used by
TF-A. This method should only be used by data images.
#. X509V3 method: This method uses industry standards like X.509 to represent
PKI certificates (authentication images). It is expected that open source
libraries will be available which can be used to parse an image represented
by this method. Such libraries can be used to write the corresponding IPL
e.g. the X.509 parsing library code in mbed TLS.
#. Platform defined method: This method caters for platform specific
proprietary standards to represent authentication or data images. For
example, The signature of a data image could be appended to the data image
raw binary. A header could be prepended to the combined blob to specify the
extents of each component. The platform will have to implement the
corresponding IPL to interpret such a format.
The following enum can be used to define these three methods.
.. code:: c
typedef enum img_type_enum {
IMG_RAW, /* Binary image */
IMG_PLAT, /* Platform specific format */
IMG_CERT, /* X509v3 certificate */
IMG_MAX_TYPES,
} img_type_t;
An IPL must provide functions with the following prototypes:
.. code:: c
void init(void);
int check_integrity(void *img, unsigned int img_len);
int get_auth_param(const auth_param_type_desc_t *type_desc,
void *img, unsigned int img_len,
void **param, unsigned int *param_len);
An IPL for each type must be registered using the following macro:
::
REGISTER_IMG_PARSER_LIB(_type, _name, _init, _check_int, _get_param)
- ``_type``: one of the types described above.
- ``_name``: a string containing the IPL name for debugging purposes.
- ``_init``: initialization function pointer.
- ``_check_int``: check image integrity function pointer.
- ``_get_param``: extract authentication parameter function pointer.
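For instance, the X.509v3 IPL described later in this document could be
registered as follows (a sketch, reusing the function names from the
prototypes above):

.. code:: c

    REGISTER_IMG_PARSER_LIB(IMG_CERT, "x509v3", init,
                            check_integrity, get_auth_param);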
The ``init()`` function will be used to initialize the IPL.
The ``check_integrity()`` function is passed a pointer to the memory where the
image has been loaded by the IO framework and the image length. It should ensure
that the image is in the format corresponding to the parsing method and has not
been tampered with. For example, RFC-2459 describes a validation sequence for an
X.509 certificate.
The ``get_auth_param()`` function is passed a parameter descriptor containing
information about the parameter (``type_desc`` and ``cookie``) to identify and
extract the data corresponding to that parameter from an image. This data will
be used to verify either the current or the next image in the CoT sequence.
Each image in the CoT will specify the parsing method it uses. This information
will be used by the IPM to find the right parser descriptor for the image.
Describing the authentication method(s)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As part of the CoT, each image has to specify one or more authentication methods
which will be used to verify it. As described in the Section "Authentication
methods", there are three methods supported by the AM.
.. code:: c
typedef enum {
AUTH_METHOD_NONE,
AUTH_METHOD_HASH,
AUTH_METHOD_SIG,
AUTH_METHOD_NUM
} auth_method_type_t;
The AM defines the type of each parameter used by an authentication method. It
uses this information to:
#. Specify to the ``get_auth_param()`` function exported by the IPM which
parameter should be extracted from an image.
#. Correctly marshall the parameters while calling the verification function
exported by the CM and PP.
#. Extract authentication parameters from a parent image in order to verify a
child image e.g. to verify the certificate image, the public key has to be
obtained from the parent image.
.. code:: c
typedef enum {
AUTH_PARAM_NONE,
AUTH_PARAM_RAW_DATA, /* Raw image data */
AUTH_PARAM_SIG, /* The image signature */
AUTH_PARAM_SIG_ALG, /* The image signature algorithm */
AUTH_PARAM_HASH, /* A hash (including the algorithm) */
AUTH_PARAM_PUB_KEY, /* A public key */
} auth_param_type_t;
The AM defines the following structure to identify an authentication parameter
required to verify an image.
.. code:: c
typedef struct auth_param_type_desc_s {
auth_param_type_t type;
void *cookie;
} auth_param_type_desc_t;
``cookie`` is used by the platform to specify additional information to the IPM
which enables it to uniquely identify the parameter that should be extracted
from an image. For example, the hash of a BL3x image in its corresponding
content certificate is stored in an X509v3 custom extension field. An extension
field can only be identified using an OID. In this case, the ``cookie`` could
contain the pointer to the OID defined by the platform for the hash extension
field while the ``type`` field could be set to ``AUTH_PARAM_HASH``. A value of 0 for
the ``cookie`` field means that it is not used.
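As a sketch of the example just described (the OID value below is made up for
illustration), the platform could define such a descriptor as follows:

.. code:: c

    /* Hypothetical OID for the BL3x hash extension field */
    static const char bl31_hash_oid[] = "1.2.3.4";

    static auth_param_type_desc_t bl31_hash = {
        .type = AUTH_PARAM_HASH,
        .cookie = (void *)bl31_hash_oid,
    };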
For each method, the AM defines a structure with the parameters required to
verify the image.
.. code:: c
/*
* Parameters for authentication by hash matching
*/
typedef struct auth_method_param_hash_s {
auth_param_type_desc_t *data; /* Data to hash */
auth_param_type_desc_t *hash; /* Hash to match with */
} auth_method_param_hash_t;
/*
* Parameters for authentication by signature
*/
typedef struct auth_method_param_sig_s {
auth_param_type_desc_t *pk; /* Public key */
auth_param_type_desc_t *sig; /* Signature to check */
auth_param_type_desc_t *alg; /* Signature algorithm */
auth_param_type_desc_t *tbs; /* Data signed */
} auth_method_param_sig_t;
The AM defines the following structure to describe an authentication method for
verifying an image.
.. code:: c
/*
* Authentication method descriptor
*/
typedef struct auth_method_desc_s {
auth_method_type_t type;
union {
auth_method_param_hash_t hash;
auth_method_param_sig_t sig;
} param;
} auth_method_desc_t;
Using the method type specified in the ``type`` field, the AM determines which
field within the ``param`` union it needs to access.
Storing Authentication parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A parameter described by ``auth_param_type_desc_t`` to verify an image could be
obtained from either the image itself or its parent image. The memory allocated
for loading the parent image will be reused for loading the child image. Hence
parameters which are obtained from the parent for verifying a child image need
to have memory allocated for them separately where they can be stored. This
memory must be statically allocated by the platform port.
The AM defines the following structure to store the data corresponding to an
authentication parameter.
.. code:: c
typedef struct auth_param_data_desc_s {
void *auth_param_ptr;
unsigned int auth_param_len;
} auth_param_data_desc_t;
The ``auth_param_ptr`` field is initialized by the platform. The ``auth_param_len``
field is used to specify the length of the data in the memory.
For parameters that can be obtained from the child image itself, the IPM is
responsible for populating the ``auth_param_ptr`` and ``auth_param_len`` fields
while executing the ``img_get_auth_param()`` function.
The AM defines the following structure to enable an image to describe the
parameters that should be extracted from it and used to verify the next image
(child) in a CoT.
.. code:: c
typedef struct auth_param_desc_s {
auth_param_type_desc_t type_desc;
auth_param_data_desc_t data;
} auth_param_desc_t;
Describing an image in a CoT
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An image in a CoT is a consolidation of the following aspects of a CoT described
above.
#. A unique identifier specified by the platform which allows the IO framework
to locate the image in a FIP and load it in the memory reserved for the data
image in the CoT.
#. A parsing method which is used by the AM to find the appropriate IPM.
#. Authentication methods and their parameters as described in the previous
section. These are used to verify the current image.
#. Parameters which are used to verify the next image in the current CoT. These
parameters are specified only by authentication images and can be extracted
from the current image once it has been verified.
The following data structure describes an image in a CoT.
.. code:: c
typedef struct auth_img_desc_s {
unsigned int img_id;
const struct auth_img_desc_s *parent;
img_type_t img_type;
auth_method_desc_t img_auth_methods[AUTH_METHOD_NUM];
auth_param_desc_t authenticated_data[COT_MAX_VERIFIED_PARAMS];
} auth_img_desc_t;
A CoT is defined as an array of ``auth_img_desc_t`` structures linked together
by the ``parent`` field. Those nodes with no parent must be authenticated using
the ROTPK stored in the platform.
Implementation example
----------------------
This section is a detailed guide explaining a trusted boot implementation using
the authentication framework. This example corresponds to the Applicative
Functional Mode (AFM) as specified in the TBBR-Client document. It is
recommended to read this guide along with the source code.
The TBBR CoT
~~~~~~~~~~~~
The CoT can be found in ``drivers/auth/tbbr/tbbr_cot.c``. This CoT consists of an
array of image descriptors and it is registered in the framework using the macro
``REGISTER_COT(cot_desc)``, where 'cot\_desc' must be the name of the array
(passing a pointer or any other type of indirection will cause the registration
process to fail).
The number of images participating in the boot process depends on the CoT. There
is, however, a minimum set of images that are mandatory in TF-A and that all
CoTs must therefore include:
- ``BL2``
- ``SCP_BL2`` (platform specific)
- ``BL31``
- ``BL32`` (optional)
- ``BL33``
The TBBR specifies the additional certificates that must accompany these images
for a proper authentication. Details about the TBBR CoT may be found in the
`Trusted Board Boot`_ document.
Following the `Platform Porting Guide`_, a platform must provide unique
identifiers for all the images and certificates that will be loaded during the
boot process. If a platform is using the TBBR as a reference for trusted boot,
these identifiers can be obtained from ``include/common/tbbr/tbbr_img_def.h``.
Arm platforms include this file in ``include/plat/arm/common/arm_def.h``. Other
platforms may also include this file or provide their own identifiers.
**Important**: the authentication module uses these identifiers to index the
CoT array, so each descriptor's location in the array must match its identifier.
Each image descriptor must specify:
- ``img_id``: the corresponding image unique identifier defined by the platform.
- ``img_type``: the image parser module uses the image type to call the proper
parsing library to check the image integrity and extract the required
authentication parameters. Three types of images are currently supported:
- ``IMG_RAW``: image is a raw binary. No parsing functions are available,
other than reading the whole image.
- ``IMG_PLAT``: image format is platform specific. The platform may use this
type for custom images not directly supported by the authentication
framework.
- ``IMG_CERT``: image is an x509v3 certificate.
- ``parent``: pointer to the parent image descriptor. The parent will contain
the information required to authenticate the current image. If the parent
is NULL, the authentication parameters will be obtained from the platform
(i.e. the BL2 and Trusted Key certificates are signed with the ROT private
key, whose public part is stored in the platform).
- ``img_auth_methods``: this array defines the authentication methods that must
be checked to consider an image authenticated. Each method consists of a
type and a list of parameter descriptors. A parameter descriptor consists of
a type and a cookie which will point to specific information required to
extract that parameter from the image (i.e. if the parameter is stored in an
x509v3 extension, the cookie will point to the extension OID). Depending on
the method type, a different number of parameters must be specified.
Supported methods are:
- ``AUTH_METHOD_HASH``: the hash of the image must match the hash extracted
from the parent image. The following parameter descriptors must be
specified:
- ``data``: data to be hashed (obtained from current image)
- ``hash``: reference hash (obtained from parent image)
- ``AUTH_METHOD_SIG``: the image (usually a certificate) must be signed with
the private key whose public part is extracted from the parent image (or
the platform if the parent is NULL). The following parameter descriptors
must be specified:
- ``pk``: the public key (obtained from parent image)
- ``sig``: the digital signature (obtained from current image)
- ``alg``: the signature algorithm used (obtained from current image)
- ``data``: the data to be signed (obtained from current image)
- ``authenticated_data``: this array indicates what authentication parameters
must be extracted from an image once it has been authenticated. Each
parameter consists of a parameter descriptor and the buffer address/size
to store the parameter. The CoT is responsible for allocating the required
memory to store the parameters.
In the ``tbbr_cot.c`` file, a set of buffers is allocated to store the parameters
extracted from the certificates. In the case of the TBBR CoT, these parameters
are hashes and public keys. In DER format, an RSA-2048 public key requires 294
bytes, and a hash requires 51 bytes. Depending on the CoT and the authentication
process, some of the buffers may be reused at different stages during the boot.
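As an illustration, such buffers could be defined as follows (a sketch; the
buffer names match the BL31 example below and the sizes match the figures
given above):

.. code:: c

    #define PK_DER_LEN      294     /* DER-encoded RSA-2048 public key */
    #define HASH_DER_LEN    51      /* DER-encoded hash, algorithm included */

    static unsigned char trusted_world_pk_buf[PK_DER_LEN];
    static unsigned char content_pk_buf[PK_DER_LEN];
    static unsigned char soc_fw_hash_buf[HASH_DER_LEN];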
Next in that file, the parameter descriptors are defined. These descriptors will
be used to extract the parameter data from the corresponding image.
Example: the BL31 Chain of Trust
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Four image descriptors form the BL31 Chain of Trust:
.. code:: c
[TRUSTED_KEY_CERT_ID] = {
.img_id = TRUSTED_KEY_CERT_ID,
.img_type = IMG_CERT,
.parent = NULL,
.img_auth_methods = {
[0] = {
.type = AUTH_METHOD_SIG,
.param.sig = {
.pk = &subject_pk,
.sig = &sig,
.alg = &sig_alg,
.data = &raw_data,
}
}
},
.authenticated_data = {
[0] = {
.type_desc = &trusted_world_pk,
.data = {
.ptr = (void *)trusted_world_pk_buf,
.len = (unsigned int)PK_DER_LEN
}
},
[1] = {
.type_desc = &non_trusted_world_pk,
.data = {
.ptr = (void *)non_trusted_world_pk_buf,
.len = (unsigned int)PK_DER_LEN
}
}
}
},
[SOC_FW_KEY_CERT_ID] = {
.img_id = SOC_FW_KEY_CERT_ID,
.img_type = IMG_CERT,
.parent = &cot_desc[TRUSTED_KEY_CERT_ID],
.img_auth_methods = {
[0] = {
.type = AUTH_METHOD_SIG,
.param.sig = {
.pk = &trusted_world_pk,
.sig = &sig,
.alg = &sig_alg,
.data = &raw_data,
}
}
},
.authenticated_data = {
[0] = {
.type_desc = &soc_fw_content_pk,
.data = {
.ptr = (void *)content_pk_buf,
.len = (unsigned int)PK_DER_LEN
}
}
}
},
[SOC_FW_CONTENT_CERT_ID] = {
.img_id = SOC_FW_CONTENT_CERT_ID,
.img_type = IMG_CERT,
.parent = &cot_desc[SOC_FW_KEY_CERT_ID],
.img_auth_methods = {
[0] = {
.type = AUTH_METHOD_SIG,
.param.sig = {
.pk = &soc_fw_content_pk,
.sig = &sig,
.alg = &sig_alg,
.data = &raw_data,
}
}
},
.authenticated_data = {
[0] = {
.type_desc = &soc_fw_hash,
.data = {
.ptr = (void *)soc_fw_hash_buf,
.len = (unsigned int)HASH_DER_LEN
}
}
}
},
[BL31_IMAGE_ID] = {
.img_id = BL31_IMAGE_ID,
.img_type = IMG_RAW,
.parent = &cot_desc[SOC_FW_CONTENT_CERT_ID],
.img_auth_methods = {
[0] = {
.type = AUTH_METHOD_HASH,
.param.hash = {
.data = &raw_data,
.hash = &soc_fw_hash,
}
}
}
}
The **Trusted Key certificate** is signed with the ROT private key and contains
the Trusted World public key and the Non-Trusted World public key as x509v3
extensions. The signature check and the key extraction must be specified in the
image descriptor using the ``img_auth_methods`` and ``authenticated_data``
arrays, respectively.
The Trusted Key certificate is authenticated by checking its digital signature
using the ROTPK. Four parameters are required to check a signature: the public
key, the algorithm, the signature and the data that has been signed. Therefore,
four parameter descriptors must be specified with the authentication method:
- ``subject_pk``: parameter descriptor of type ``AUTH_PARAM_PUB_KEY``. This type
is used to extract a public key from the parent image. If the cookie is an
OID, the key is extracted from the corresponding x509v3 extension. If the
cookie is NULL, the subject public key is retrieved. In this case, because
the parent image is NULL, the public key is obtained from the platform
(this key will be the ROTPK).
- ``sig``: parameter descriptor of type ``AUTH_PARAM_SIG``. It is used to extract
the signature from the certificate.
- ``sig_alg``: parameter descriptor of type ``AUTH_PARAM_SIG``. It is used to
extract the signature algorithm from the certificate.
- ``raw_data``: parameter descriptor of type ``AUTH_PARAM_RAW_DATA``. It is used
to extract the data to be signed from the certificate.
Once the signature has been checked and the certificate authenticated, the
Trusted World public key needs to be extracted from the certificate. A new entry
is created in the ``authenticated_data`` array for that purpose. In that entry,
the corresponding parameter descriptor must be specified along with the buffer
address to store the parameter value. In this case, the ``trusted_world_pk``
descriptor is used to extract the public key from an x509v3 extension with OID
``TRUSTED_WORLD_PK_OID``. The BL31 key certificate (``SOC_FW_KEY_CERT_ID`` in
the example above) will use this descriptor as a parameter in the signature
authentication method. The key is stored in the ``trusted_world_pk_buf`` buffer.
The **BL31 Key certificate** is authenticated by checking its digital signature
using the Trusted World public key obtained previously from the Trusted Key
certificate. In the image descriptor, we specify a single authentication method
by signature whose public key is the ``trusted_world_pk``. Once this
certificate has been authenticated, we have to extract the BL31 public key,
stored in the extension specified by ``soc_fw_content_pk``. This key will be
copied to the ``content_pk_buf`` buffer.
The **BL31 certificate** is authenticated by checking its digital signature
using the BL31 public key obtained previously from the BL31 Key certificate.
We specify the authentication method using ``soc_fw_content_pk`` as the public
key. After authentication, we need to extract the BL31 hash, stored in the
extension specified by ``soc_fw_hash``. This hash will be copied to the
``soc_fw_hash_buf`` buffer.
The **BL31 image** is authenticated by calculating its hash and matching it
with the hash obtained from the BL31 certificate. The image descriptor contains
a single authentication method by hash. The parameters to the hash method are
the reference hash, ``soc_fw_hash``, and the data to be hashed. In this case, it is
the whole image, so we specify ``raw_data``.
The image parser library
~~~~~~~~~~~~~~~~~~~~~~~~
The image parser module relies on libraries to check the image integrity and
extract the authentication parameters. The number and type of parser libraries
depend on the images used in the CoT. Raw images do not need a library, so
only an x509v3 library is required for the TBBR CoT.
Arm platforms will use an x509v3 library based on mbed TLS. This library may be
found in ``drivers/auth/mbedtls/mbedtls_x509_parser.c``. It exports three
functions:
.. code:: c
void init(void);
int check_integrity(void *img, unsigned int img_len);
int get_auth_param(const auth_param_type_desc_t *type_desc,
void *img, unsigned int img_len,
void **param, unsigned int *param_len);
The library is registered in the framework using the macro
``REGISTER_IMG_PARSER_LIB()``. Each time the image parser module needs to access
an image of type ``IMG_CERT``, it will call the corresponding function exported
in this file.
The build system must be updated to include the corresponding library and
mbed TLS sources. Arm platforms use the ``arm_common.mk`` file to pull the
sources.
The cryptographic library
~~~~~~~~~~~~~~~~~~~~~~~~~
The cryptographic module relies on a library to perform the required operations,
i.e. verify a hash or a digital signature. Arm platforms will use a library
based on mbed TLS, which can be found in
``drivers/auth/mbedtls/mbedtls_crypto.c``. This library is registered in the
authentication framework using the macro ``REGISTER_CRYPTO_LIB()`` and exports
three functions:
.. code:: c
void init(void);
int verify_signature(void *data_ptr, unsigned int data_len,
void *sig_ptr, unsigned int sig_len,
void *sig_alg, unsigned int sig_alg_len,
void *pk_ptr, unsigned int pk_len);
int verify_hash(void *data_ptr, unsigned int data_len,
void *digest_info_ptr, unsigned int digest_info_len);
The mbed TLS library algorithm support is configured by the
``TF_MBEDTLS_KEY_ALG`` variable, which can take one of three values: ``rsa``,
``ecdsa`` or ``rsa+ecdsa``. This variable allows the Makefile to include the
corresponding sources in the build for the various algorithms. Setting the
variable to ``rsa+ecdsa`` enables support for both the rsa and ecdsa algorithms
in the mbed TLS library.
Note: If code size is a concern, the build option ``MBEDTLS_SHA256_SMALLER`` can
be defined in the platform Makefile. It will make mbed TLS use an implementation
of SHA-256 with smaller memory footprint (~1.5 KB less) but slower (~30%).
--------------
*Copyright (c) 2017-2018, Arm Limited and Contributors. All rights reserved.*
.. _Trusted Board Boot: ./trusted-board-boot.rst
.. _Platform Porting Guide: ./porting-guide.rst

File diff suppressed because it is too large


@@ -0,0 +1,173 @@
Arm CPU Specific Build Macros
=============================
.. section-numbering::
:suffix: .
.. contents::
This document describes the various build options present in the CPU specific
operations framework to enable errata workarounds and to enable optimizations
for a specific CPU on a platform.
Security Vulnerability Workarounds
----------------------------------
TF-A exports a series of build flags which control which security
vulnerability workarounds should be applied at runtime.
- ``WORKAROUND_CVE_2017_5715``: Enables the security workaround for
`CVE-2017-5715`_. This flag can be set to 0 by the platform if none
of the PEs in the system need the workaround. Setting this flag to 0 provides
no performance benefit on unaffected platforms; keeping the default value of 1
helps to comply with the recommendation in the spec regarding workaround
discovery.
Defaults to 1.
- ``WORKAROUND_CVE_2018_3639``: Enables the security workaround for
`CVE-2018-3639`_. Defaults to 1. The TF-A project recommends keeping
the default value of 1 even on platforms that are unaffected by
CVE-2018-3639, in order to comply with the recommendation in the spec
regarding workaround discovery.
- ``DYNAMIC_WORKAROUND_CVE_2018_3639``: Enables dynamic mitigation for
`CVE-2018-3639`_. This build option should be set to 1 if the target
platform contains at least 1 CPU that requires dynamic mitigation.
Defaults to 0.
CPU Errata Workarounds
----------------------
TF-A exports a series of build flags which control the errata workarounds that
are applied to each CPU by the reset handler. The errata details can be found
in the CPU specific errata documents published by Arm:
- `Cortex-A53 MPCore Software Developers Errata Notice`_
- `Cortex-A57 MPCore Software Developers Errata Notice`_
- `Cortex-A72 MPCore Software Developers Errata Notice`_
The errata workarounds are implemented for a particular revision or a set of
processor revisions. This is checked by the reset handler at runtime. Each
errata workaround is identified by its ``ID`` as specified in the processor's
errata notice document. The format of the define used to enable/disable the
errata workaround is ``ERRATA_<Processor name>_<ID>``, where the ``Processor name``
is for example ``A57`` for the ``Cortex_A57`` CPU.
Refer to the section *CPU errata status reporting* in
`Firmware Design guide`_ for information on how to write errata workaround
functions.
All workarounds are disabled by default. The platform is responsible for
enabling these workarounds according to its requirement by defining the
errata workaround build flags in the platform-specific makefile. If a
workaround is enabled for the wrong CPU revision, it is not applied. In
DEBUG builds, this is indicated by printing a warning to the crash console.
In the current implementation, a platform which has more than one variant
of a processor, with different revisions, has no runtime mechanism available
to specify which errata workarounds should be enabled.
The value of each build flag is 0 by default, that is, disabled. Any other
value will enable it.
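For example, the Cortex-A53 link-time workarounds could be enabled on the
build command line (an illustrative invocation; the platform name is a
placeholder):

::

    make PLAT=<platform> ERRATA_A53_835769=1 ERRATA_A53_843419=1 all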
For Cortex-A53, the following errata build flags are defined:
- ``ERRATA_A53_826319``: This applies errata 826319 workaround to Cortex-A53
CPU. This needs to be enabled only for revision <= r0p2 of the CPU.
- ``ERRATA_A53_835769``: This applies erratum 835769 workaround at compile and
link time to Cortex-A53 CPU. This needs to be enabled for some variants of
revision <= r0p4. This workaround can lead the linker to create ``*.stub``
sections.
- ``ERRATA_A53_836870``: This applies errata 836870 workaround to Cortex-A53
CPU. This needs to be enabled only for revision <= r0p3 of the CPU. From
r0p4 onwards, this workaround is enabled by default in hardware.
- ``ERRATA_A53_843419``: This applies erratum 843419 workaround at link time
to Cortex-A53 CPU. This needs to be enabled for some variants of revision
<= r0p4. This workaround can lead the linker to emit ``*.stub`` sections
which are 4kB aligned.
- ``ERRATA_A53_855873``: This applies errata 855873 workaround to Cortex-A53
CPUs. Though the erratum is present in every revision of the CPU,
this workaround is only applied to CPUs from r0p3 onwards, which feature
a chicken bit in CPUACTLR\_EL1 to enable a hardware workaround.
Earlier revisions of the CPU have other errata which require the same
workaround in software, so they should be covered anyway.
For Cortex-A57, the following errata build flags are defined:
- ``ERRATA_A57_806969``: This applies errata 806969 workaround to Cortex-A57
CPU. This needs to be enabled only for revision r0p0 of the CPU.
- ``ERRATA_A57_813419``: This applies errata 813419 workaround to Cortex-A57
CPU. This needs to be enabled only for revision r0p0 of the CPU.
- ``ERRATA_A57_813420``: This applies errata 813420 workaround to Cortex-A57
CPU. This needs to be enabled only for revision r0p0 of the CPU.
- ``ERRATA_A57_826974``: This applies errata 826974 workaround to Cortex-A57
CPU. This needs to be enabled only for revision <= r1p1 of the CPU.
- ``ERRATA_A57_826977``: This applies errata 826977 workaround to Cortex-A57
CPU. This needs to be enabled only for revision <= r1p1 of the CPU.
- ``ERRATA_A57_828024``: This applies errata 828024 workaround to Cortex-A57
CPU. This needs to be enabled only for revision <= r1p1 of the CPU.
- ``ERRATA_A57_829520``: This applies errata 829520 workaround to Cortex-A57
CPU. This needs to be enabled only for revision <= r1p2 of the CPU.
- ``ERRATA_A57_833471``: This applies errata 833471 workaround to Cortex-A57
CPU. This needs to be enabled only for revision <= r1p2 of the CPU.
- ``ERRATA_A57_859972``: This applies errata 859972 workaround to Cortex-A57
CPU. This needs to be enabled only for revision <= r1p3 of the CPU.
For Cortex-A72, the following errata build flags are defined:
- ``ERRATA_A72_859971``: This applies the erratum 859971 workaround to the
  Cortex-A72 CPU. It needs to be enabled only for revisions <= r0p3 of the CPU.
CPU Specific optimizations
--------------------------
This section describes some of the optimizations allowed by the CPU
micro-architecture that can be enabled by the platform as desired; a usage
sketch follows the list below.
- ``SKIP_A57_L1_FLUSH_PWR_DWN``: This flag enables an optimization in the
Cortex-A57 cluster power down sequence by not flushing the Level 1 data
cache. The L1 data cache and the L2 unified cache are inclusive. A flush
of the L2 by set/way flushes any dirty lines from the L1 as well. This
is a known safe deviation from the Cortex-A57 TRM defined power down
sequence. Each Cortex-A57 based platform must make its own decision on
whether to use the optimization.
- ``A53_DISABLE_NON_TEMPORAL_HINT``: This flag disables the cache non-temporal
hint. The LDNP/STNP instructions as implemented on Cortex-A53 do not behave
in a way most programmers expect, and will most probably result in a
significant speed degradation to any code that employs them. The Armv8-A
architecture (see Arm DDI 0487A.h, section D3.4.3) allows cores to ignore
the non-temporal hint and treat LDNP/STNP as LDP/STP instead. Enabling this
flag enforces this behaviour. This needs to be enabled only for revisions
<= r0p3 of the CPU and is enabled by default.
- ``A57_DISABLE_NON_TEMPORAL_HINT``: This flag has the same behaviour as
``A53_DISABLE_NON_TEMPORAL_HINT`` but for Cortex-A57. This needs to be
enabled only for revisions <= r1p2 of the CPU and is enabled by default,
as recommended in section "4.7 Non-Temporal Loads/Stores" of the
`Cortex-A57 Software Optimization Guide`_.
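These flags can be overridden on the build command line as well as in the
platform makefile. A hedged example follows; the platform name and flag
choices are illustrative only::

    # Skip the L1 flush on A57 cluster power down, and turn off the
    # (default-on) Cortex-A57 non-temporal hint workaround
    make PLAT=fvp SKIP_A57_L1_FLUSH_PWR_DWN=1 A57_DISABLE_NON_TEMPORAL_HINT=0 all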
--------------
*Copyright (c) 2014-2018, Arm Limited and Contributors. All rights reserved.*
.. _CVE-2017-5715: http://www.cve.mitre.org/cgi-bin/cvename.cgi?name=2017-5715
.. _Cortex-A53 MPCore Software Developers Errata Notice: http://infocenter.arm.com/help/topic/com.arm.doc.epm048406/Cortex_A53_MPCore_Software_Developers_Errata_Notice.pdf
.. _Cortex-A57 MPCore Software Developers Errata Notice: http://infocenter.arm.com/help/topic/com.arm.doc.epm049219/cortex_a57_mpcore_software_developers_errata_notice.pdf
.. _Cortex-A72 MPCore Software Developers Errata Notice: http://infocenter.arm.com/help/topic/com.arm.doc.epm012079/index.html
.. _Firmware Design guide: firmware-design.rst
.. _Cortex-A57 Software Optimization Guide: http://infocenter.arm.com/help/topic/com.arm.doc.uan0015b/Cortex_A57_Software_Optimization_Guide_external.pdf

View File

@@ -0,0 +1,74 @@
#
# Copyright (c) 2015-2017, ARM Limited and Contributors. All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
#
#
# This Makefile generates the image files used in the ARM Trusted Firmware
# documentation from the corresponding .dia files.
#
# The PNG files in the present directory have been generated using Dia version
# 0.97.2, which can be obtained from https://wiki.gnome.org/Apps/Dia/Download
#
# generate_image invokes the dia tool to export an image from a .dia file
# $(1) = layers to show
# $(2) = output image file name
# $(3) = output image format
# $(4) = additional options
# $(5) = dia source file
define generate_image
dia --show-layers=$(1) --filter=$(3) --export=$(2) $(4) $(5)
endef
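# For example, for the default_reset_code.png target defined below, this
# expands to (layers and options taken from the variables underneath):
#   dia --show-layers="Frontground,Background,cpu_type_check,boot_type_check" \
#       --filter=png --export=default_reset_code.png reset_code_flow.dia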
RESET_DIA = reset_code_flow.dia
RESET_PNGS = \
default_reset_code.png \
reset_code_no_cpu_check.png \
reset_code_no_boot_type_check.png \
reset_code_no_checks.png
# The $(RESET_DIA) file is organized in several layers.
# Each image is generated by combining and exporting the appropriate set of
# layers.
default_reset_code_layers = "Frontground,Background,cpu_type_check,boot_type_check"
reset_code_no_cpu_check_layers = "Frontground,Background,no_cpu_type_check,boot_type_check"
reset_code_no_boot_type_check_layers = "Frontground,Background,cpu_type_check,no_boot_type_check"
reset_code_no_checks_layers = "Frontground,Background,no_cpu_type_check,no_boot_type_check"
default_reset_code_opts =
reset_code_no_cpu_check_opts =
reset_code_no_boot_type_check_opts =
reset_code_no_checks_opts =
INT_DIA = int_handling.dia
INT_PNGS = \
sec-int-handling.png \
non-sec-int-handling.png
# The $(INT_DIA) file is organized in several layers.
# Each image is generated by combining and exporting the appropriate set of
# layers.
non-sec-int-handling_layers = "non_sec_int_bg,legend,non_sec_int_note,non_sec_int_handling"
sec-int-handling_layers = "sec_int_bg,legend,sec_int_note,sec_int_handling"
non-sec-int-handling_opts = --size=1692x
sec-int-handling_opts = --size=1570x
XLAT_DIA = xlat_align.dia
XLAT_PNG = xlat_align.png
xlat_align_layers = "bg,translations"
xlat_align_opts =
all: $(RESET_PNGS) $(INT_PNGS) $(XLAT_PNG)
$(RESET_PNGS): $(RESET_DIA)
	$(call generate_image,$($(patsubst %.png,%_layers,$@)),$@,png,$($(patsubst %.png,%_opts,$@)),$<)
$(INT_PNGS): $(INT_DIA)
	$(call generate_image,$($(patsubst %.png,%_layers,$@)),$@,png,$($(patsubst %.png,%_opts,$@)),$<)
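# xlat_align.png is produced in two steps: dia first exports an SVG, then
# inkscape rasterizes it to PNG with an export resolution of 45 dpi (-d 45).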
$(XLAT_PNG): $(XLAT_DIA)
	$(call generate_image,$($(patsubst %.png,%_layers,$@)),$(patsubst %.png,%.svg,$@),svg,$($(patsubst %.png,%_opts,$@)),$<)
	inkscape -z $(patsubst %.png,%.svg,$@) -e $@ -d 45
