Add cache maintenance regs and simple API for inner cache maintenance #40


Open

robamu wants to merge 1 commit into main from add-cache-maintenance-regs
Conversation

robamu
Contributor

@robamu robamu commented Jun 23, 2025

This still needs to be tested, and I might also add the API that cleans/invalidates by virtual address, because it is required for range-based cleaning/invalidation.
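For reference, a clean/invalidate-by-MVA loop over an address range could look roughly like the sketch below. This is only an illustration, not the API proposed in this PR: it uses the ARMv7 DCCIMVAC operation directly and assumes a fixed 32-byte cache line size, which real code would read from the Cache Type Register or the cache geometry registers instead.

```rust
use core::arch::asm;

/// Clean and invalidate all data cache lines covering [addr, addr + len)
/// to the Point of Coherency, using DCCIMVAC (MCR p15, 0, <Rt>, c7, c14, 1).
///
/// ASSUMPTION: a fixed 32-byte cache line size; a real implementation would
/// derive this from CTR.DminLine or CCSIDR instead of hard-coding it.
pub unsafe fn clean_and_invalidate_dcache_by_range(addr: usize, len: usize) {
    const LINE_SIZE: usize = 32;
    let mut line = addr & !(LINE_SIZE - 1);
    let end = addr + len;
    while line < end {
        // DCCIMVAC: Data Cache line Clean and Invalidate by MVA to PoC
        asm!("mcr p15, 0, {0}, c7, c14, 1", in(reg) line, options(nostack, preserves_flags));
        line += LINE_SIZE;
    }
    // Make sure the maintenance operations have completed before, e.g.,
    // a DMA transfer is started.
    asm!("dsb", options(nostack, preserves_flags));
}
```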

@robamu
Contributor Author

robamu commented Jun 23, 2025

@jonathanpallant I wrote the API with systems in mind where the cache geometry/parameters are fixed. But maybe it would be a good idea to also be able to read these geometry parameters from the corresponding CP15 registers? An alternative high-level clean/invalidate API that takes these geometry parameters as arguments would then probably be a good idea as well.
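A rough sketch of what reading the geometry from CP15 could look like (a hypothetical helper using the ARMv7 CSSELR/CCSIDR encoding, not what this PR currently implements):

```rust
use core::arch::asm;

/// Read line size (bytes), associativity and number of sets for the L1
/// data/unified cache from CSSELR/CCSIDR instead of hard-coding them.
/// ASSUMPTION: the pre-ARMv8.3 CCSIDR field layout used by Cortex-A/R cores.
pub unsafe fn read_l1_dcache_geometry() -> (u32, u32, u32) {
    let ccsidr: u32;
    // Select the level 1 data/unified cache in CSSELR (Level = 0, InD = 0)
    asm!("mcr p15, 2, {0}, c0, c0, 0", in(reg) 0u32, options(nostack, preserves_flags));
    asm!("isb", options(nostack, preserves_flags));
    // Read the Cache Size ID Register for the selected cache
    asm!("mrc p15, 1, {0}, c0, c0, 0", out(reg) ccsidr, options(nostack, preserves_flags));
    let line_size_bytes = 4u32 << ((ccsidr & 0x7) + 2); // LineSize = log2(words) - 2
    let associativity = ((ccsidr >> 3) & 0x3FF) + 1;
    let num_sets = ((ccsidr >> 13) & 0x7FFF) + 1;
    (line_size_bytes, associativity, num_sets)
}
```

The high-level clean/invalidate functions could then either take such a geometry description as a parameter or discover it from these registers.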

@robamu robamu changed the title from "Add cache maintenance regs" to "Add cache maintenance regs and simple API for inner cache maintenance" Jun 23, 2025
@jonathanpallant
Contributor

I'm OK with starting out only supporting people who know precisely which chip they are running on.

@robamu robamu force-pushed the add-cache-maintenance-regs branch 2 times, most recently from 09b91d2 to b6e4468 Compare June 25, 2025 18:05
@robamu robamu force-pushed the add-cache-maintenance-regs branch 2 times, most recently from 5569820 to 4261b41 Compare July 16, 2025 14:01
@robamu robamu marked this pull request as ready for review July 16, 2025 14:01
@robamu
Contributor Author

robamu commented Jul 16, 2025

Similarly to #39, this was used and tested as part of a zynq7000 Ethernet implementation.

@robamu
Contributor Author

robamu commented Jul 16, 2025

It would be nice to have automated testing, but that is quite tricky.

The "easiest" way would be to use some QEMU device that interfaces with main memory only through DMA, so that cache maintenance is absolutely necessary for it to work. The Zynq machine (https://www.qemu.org/docs/master/system/arm/xlnx-zynq.html) appears to support the PrimeCell DMA, but for that DMA I would have to write some sort of mini compiler for its DSL. And while the Zynq machine also seems to support Ethernet, that would be a lot more complicated than a memory-to-memory DMA, because the Ethernet interface would then need to be mocked/checked as well.

NOTE: Does QEMU even emulate the L1/L2 caches? If it does not, then this can only be tested on real hardware.

@robamu robamu force-pushed the add-cache-maintenance-regs branch from 4261b41 to 399efa2 Compare July 16, 2025 14:23
@jonathanpallant
Contributor

QEMU does JIT translation to native code, so I suspect it does not emulate cache lines.

The Arm Fixed Virtual Platform seems to simulate caches (see https://developer.arm.com/documentation/100966/1128/BaseR-Platform-FVPs/FVP-BaseR-Cortex-R52). Not free though.

@jonathanpallant
Contributor

@jonathanpallant left a comment

Whilst we may not have a simulator that perfectly emulates the cache behaviour, some example programs would be useful here.
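For illustration, such an example program might center on the classic DMA pattern: clean the source buffer before the device reads it, invalidate the destination buffer before the CPU reads what the device wrote. The helper and driver names below are placeholders for this sketch, not this crate's API:

```rust
// Placeholder declarations so the sketch is self-contained; a real example
// would use the crate's cache-maintenance API and an actual DMA driver.
extern "C" {
    fn clean_dcache_range(addr: usize, len: usize);
    fn invalidate_dcache_range(addr: usize, len: usize);
    fn start_mem_to_mem_dma(src: *const u8, dst: *mut u8, len: usize);
    fn wait_for_dma_completion();
}

// Buffers aligned to the assumed 32-byte cache line size, so maintenance
// operations on them cannot touch unrelated adjacent data.
#[repr(align(32))]
struct DmaBuffer([u8; 256]);

fn dma_roundtrip() {
    let tx_buf = DmaBuffer([0xAB; 256]);
    let mut rx_buf = DmaBuffer([0; 256]);
    unsafe {
        // Clean the source buffer so the DMA engine reads the data from
        // main memory rather than finding stale contents there.
        clean_dcache_range(tx_buf.0.as_ptr() as usize, tx_buf.0.len());
        // Invalidate the destination buffer up front so no dirty lines can
        // be written back on top of the DMA data while the transfer runs.
        invalidate_dcache_range(rx_buf.0.as_ptr() as usize, rx_buf.0.len());

        start_mem_to_mem_dma(tx_buf.0.as_ptr(), rx_buf.0.as_mut_ptr(), 256);
        wait_for_dma_completion();

        // Invalidate again before reading, in case speculation pulled the
        // lines back into the cache during the transfer.
        invalidate_dcache_range(rx_buf.0.as_ptr() as usize, rx_buf.0.len());
    }
    assert_eq!(tx_buf.0, rx_buf.0);
}
```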

@jonathanpallant
Contributor

Opened #47, but this is fine for now

@jonathanpallant jonathanpallant added this pull request to the merge queue Jul 17, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to no response for status checks Jul 17, 2025
@robamu
Contributor Author

robamu commented Jul 17, 2025

Something went wrong with the merge bot
