Debug host access to HyperRAM? #381

Open
nwf opened this issue Jan 22, 2025 · 4 comments · May be fixed by #382

Comments

@nwf

nwf commented Jan 22, 2025

Experimenting with CHERIoT-Platform/cheriot-rtos#425, I have a build of the 01.hello_world example that works fine in the sonata simulator (as in the devcontainer, build cdeeb9a17869, from today) but fails to get off the ground on the actual board.

Some extremely tedious debugging 1 has brought me back to very early parts of the RTOS loader, specifically https://github.com/CHERIoT-Platform/cheriot-rtos/blob/52a1ba418013e1dbe55addf7b7fcf66e96bb4347/sdk/core/loader/boot.S#L45-L59. In particular, ca1 correctly ends up being a tagged capability to 0x00101d74, in SRAM, and subsequent bytes, but the word read back by clw s0, IMAGE_HEADER_LOADER_CODE_START_OFFSET(ca1) is sourced from 0x40001d74, in HyperRAM, and that's the beginning of an extremely rapid unintentional disassembly of the edifice that the loader is attempting to build.

It certainly appears as though something about the actual synthesized gateware, as opposed to the simulated gateware, is misdirecting a read intended for SRAM to HyperRAM. Of note, the instructions being executed are being fetched from HyperRAM (and apparently, from observation, correctly so).

Does that theory hold water? Would the 01.hello_world ELF and/or dump files be useful for investigation?

Footnotes

  1. In particular, this is early enough that basically nothing is initialized, but the 8 GPIO LEDs sure do work. So, 8 bits of information per recompile and code reload... let's not say "fast fluxing" of images, but I was happy to have OpenOCD working.

@marnovandermaas
Contributor

This is very weird. The SRAM and HyperRAM mappings should be the same in simulation and on the FPGA. Just to double-check, can you run your example on the simulator at the commit matching your bitstream?

@rmn30

rmn30 commented Jan 22, 2025

I'm able to run hello world locally on FPGA with Sonata v1 when loaded via uf2 / USB storage driver so it might be something to do with the way @nwf is loading using OpenOCD? I'll check in with them when they are awake.

@nwf
Author

nwf commented Jan 22, 2025

This could be on the programming side, through the debug host on the SoC, rather than on the runtime/CPU side of things. If I run...

openocd -f .../sonata-openocd-cfg.tcl \
  -c 'load_image ./sim_boot_stub 0x0' \
  -c 'load_image ./image' \
  -c 'dump_image dump_sram 0x00100000 0x10000' \
  -c 'dump_image dump_hram 0x40000000 0x10000' -c exit

with image containing data destined for both SRAM and HyperRAM, dump_sram comes back holding a copy of the HyperRAM bits (cmp dump_sram dump_hram says they're byte-for-byte identical, and indeed these are the bytes the ELF loads to HyperRAM). Further experiments show:

  • Changing the order of the dump_image commands makes no difference.
  • Switching the order of the load_image commands leaves the right bits at the top of SRAM, but the rest of SRAM is filled with a copy of HyperRAM... and unfortunately dump_hram suggests that HyperRAM is also corrupted by these later writes.
  • That's true even if I use a different load_image ELF to explicitly zero SRAM first, so this isn't just residual data between openocd commands.
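The byte-for-byte comparison underlying these experiments can be sketched as a small script (hypothetical filenames matching the dump_image outputs above; not part of the actual workflow):

```python
# Compare two raw memory dumps and report the first differing offset.
# Identical dumps of two supposedly distinct address ranges would
# suggest the ranges are aliases of the same physical memory.

def first_difference(path_a, path_b):
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    n = min(len(a), len(b))
    for i in range(n):
        if a[i] != b[i]:
            return i  # first offset where the dumps diverge
    # Same prefix; differ only if the lengths differ.
    return None if len(a) == len(b) else n

# Usage (hypothetical filenames from the dump_image commands above):
#   first_difference("dump_sram", "dump_hram")
# A result of None is the aliasing-suspected case observed here.
```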

If I interleave loads and dumps, like this, I can see that the two purported copies of HyperRAM differ in exactly the way you'd expect given the above (for the first 146 bytes, the .text size of sim_boot_stub):

openocd -f /home/nwf/mnt/veloci-cheri/source/cheriot/lowrisc-sonata/sonata-system/util/sonata-openocd-cfg.tcl \
  -c 'load_image ./image.new' \
  -c 'dump_image dump_hram 0x40000000 0x10000' \
  -c 'load_image ./sim_boot_stub 0x0' \
  -c 'dump_image dump_hram2 0x40000000 0x10000' \
  -c exit

Is it possible that the Debug Host's writes are asserting write enables on both SRAM and HyperRAM?

FWIW, the board has the 1.0 bitstream in all three FPGA slots and the simple demo in software slot 1. I just redid that to be sure.

But yes, if I load this image via the UF2 loader (hadn't had that working locally before now), it does indeed run fine, so it's not the image's fault, I think.

ETA: is it possible to connect openocd to the Sonata simulator somehow?

@nwf
Author

nwf commented Jan 22, 2025

H-Hey!!

dbg_host: ["sram"],
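What that configuration line suggests: the debug host's crossbar connectivity list names only SRAM, so HyperRAM-region addresses have no legal route from the debug host. A toy model of that failure mode (names and address ranges are illustrative only, not the actual Sonata bus fabric, and a real crossbar may silently misdirect an unroutable access rather than reject it):

```python
# Toy crossbar model: each host carries a list of targets it is wired
# to. An address that decodes to a target missing from that list has
# no correct route; hardware may then misdirect the access, producing
# aliasing like the SRAM/HyperRAM confusion observed in this issue.

REGIONS = {  # hypothetical address map, loosely mirroring the thread
    "sram":     (0x0010_0000, 0x0012_0000),
    "hyperram": (0x4000_0000, 0x4010_0000),
}

def route(host_targets, addr):
    """Return the target a well-formed crossbar would select, or None
    when the host is not wired to the target owning this address."""
    for name, (base, top) in REGIONS.items():
        if base <= addr < top:
            return name if name in host_targets else None
    return None  # address falls outside every known region

# With dbg_host: ["sram"], a HyperRAM address has no legal route:
assert route(["sram"], 0x4000_1D74) is None
# Adding "hyperram" to the host's list restores correct routing:
assert route(["sram", "hyperram"], 0x4000_1D74) == "hyperram"
assert route(["sram"], 0x0010_1D74) == "sram"
```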

@nwf nwf changed the title Misdirected RAM accesses? Debug host access to HyperRAM? Jan 22, 2025
@nwf nwf linked a pull request Jan 22, 2025 that will close this issue