Compare commits


41 Commits

Author SHA1 Message Date
Marco Carvalho
3e0d67533f Enhance Error Handling with Try-Pattern Refactoring (#6610)
* Enhance Error Handling with Try-Pattern Refactoring

* refactoring

* refactoring

* Update src/Ryujinx.HLE/FileSystem/ContentPath.cs

Co-authored-by: gdkchan <gab.dark.100@gmail.com>

---------

Co-authored-by: gdkchan <gab.dark.100@gmail.com>
2024-04-07 17:55:34 -03:00
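
For reference, a minimal sketch of the try-pattern this refactoring applies (hypothetical names, not the actual ContentPath code): failure is reported through the return value and the result comes back via an out parameter, so callers branch instead of catching exceptions.

static class ContentPathExample
{
    // Hypothetical try-pattern method: no exception on unknown input.
    public static bool TryGetRealPath(string switchPath, out string realPath)
    {
        switch (switchPath)
        {
            case "@SystemContent":
                realPath = "/system/contents";
                return true;
            default:
                realPath = null;
                return false;
        }
    }
}

// Usage: if (ContentPathExample.TryGetRealPath(path, out string real)) { ... }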
Marco Carvalho
0b55914864 Replacing the try-catch block with null-conditional and null-coalescing operators (#6612)
* Replacing the try-catch block with null-conditional and null-coalescing operators

* repeating
2024-04-07 16:49:14 -03:00
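
An illustration of the two operators named above (hypothetical types, shown for clarity): the null-conditional operator (?.) short-circuits member access on null, and the null-coalescing operator (??) supplies the fallback the catch block used to provide.

class Node
{
    public Node Child;
    public string Name;
}

static class NullOperatorsExample
{
    static string GetName(Node node)
    {
        // Before: try { return node.Child.Name; }
        //         catch (NullReferenceException) { return "unknown"; }
        // After: same result, no exception machinery on the hot path.
        return node?.Child?.Name ?? "unknown";
    }
}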
MutantAura
451a28afb8 misc: Add ANGLE configuration option to JSON and CLI (#6520)
* Add hardware-acceleration toggle to ConfigurationState.

* Add command line override for easier RenderDoc use.

* Adjust CLI arguments.

* fix whitespace format check

* fix copypasta in comment

* Add X11 rendering mode toggle

* Remove ANGLE specific comments.
2024-04-06 14:58:02 -03:00
gdkchan
12b235700c Delete old 16KB page workarounds (#6584)
* Delete old 16KB page workarounds

* Rename Supports4KBPages to UsesPrivateAllocations

* Format whitespace

* This one should be false too

* Update XML doc
2024-04-06 13:51:44 -03:00
gdkchan
3be616207d Vulkan: Fix swapchain image view leak (#6509) 2024-04-06 13:38:52 -03:00
gdkchan
791bf22109 Vulkan: Skip draws when patches topology is used without a tessellation shader (#6508) 2024-04-06 13:25:51 -03:00
dependabot[bot]
66b1d59c66 nuget: bump DynamicData from 8.3.27 to 8.4.1 (#6536)
Bumps [DynamicData](https://github.com/reactiveui/DynamicData) from 8.3.27 to 8.4.1.
- [Release notes](https://github.com/reactiveui/DynamicData/releases)
- [Changelog](https://github.com/reactivemarbles/DynamicData/blob/main/ReleaseNotes.md)
- [Commits](https://github.com/reactiveui/DynamicData/compare/8.3.27...8.4.1)

---
updated-dependencies:
- dependency-name: DynamicData
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-06 14:07:06 +02:00
WilliamWsyHK
c8bb05633e Add mod enablement status in the log message (#6571) 2024-04-06 13:47:01 +02:00
czcx
fb1171a21e Update README.md (#6575) 2024-04-06 13:45:24 +02:00
Marco Carvalho
22c0aa9c90 "Task.Wait()" synchronously blocks, use "await" instead (#6598) 2024-04-06 13:36:18 +02:00
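The distinction, in a minimal sketch (an assumed example, not the changed Ryujinx code): Wait() blocks the calling thread until the task completes and wraps failures in an AggregateException, while await releases the thread and rethrows the original exception.

using System.Threading.Tasks;

static class WaitVsAwait
{
    static void Blocking(Task task)
    {
        task.Wait(); // blocks the thread; can deadlock on a UI SynchronizationContext
    }

    static async Task NonBlockingAsync(Task task)
    {
        await task;  // suspends without holding the thread
    }
}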
Ac_K
6d28b64312 ts: Migrate service to Horizon project (#6514)
* ts: Migrate service to Horizon project

This PR migrates the `ts` service (stored in `ptm`) to the Horizon project:
- It stubs all known IPCs.
- IpcServer consts are checked by RE.

Closes #6480

* Fix args
2024-04-05 15:45:43 -03:00
gdkchan
05c041feeb Ignore diacritics on game search (#6602) 2024-04-05 15:26:45 -03:00
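A common way to implement diacritic-insensitive matching in .NET (an assumed approach shown for illustration; see the PR for the actual implementation) is to decompose the string and drop combining marks:

using System.Globalization;
using System.Text;

static class SearchNormalization
{
    public static string RemoveDiacritics(string text)
    {
        // Decompose (e.g. "é" -> "e" + combining acute accent)...
        string formD = text.Normalize(NormalizationForm.FormD);
        var sb = new StringBuilder(formD.Length);

        // ...then keep everything except the combining marks.
        foreach (char c in formD)
        {
            if (CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
            {
                sb.Append(c);
            }
        }

        return sb.ToString().Normalize(NormalizationForm.FormC);
    }
}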
gdkchan
8c2da1aa04 Add missing ModWindowTitle locale key (#6601)
* Add missing ModWindowTitle locale key

* Fix this while I'm at it
2024-04-05 12:54:08 -03:00
jhorv
5def0429f8 Add support to IVirtualMemoryManager for zero-copy reads (#6251)
* - WritableRegion: enable wrapping IMemoryOwner<byte>
- IVirtualMemoryManager impls of GetWritableRegion() use pooled memory when region is non-contiguous.
- IVirtualMemoryManager: add GetReadOnlySequence() and impls
- ByteMemoryPool: add new method RentCopy()
- ByteMemoryPool: make class static, remove ctor and singleton field from earlier impl

* - BytesReadOnlySequenceSegment: move from Ryujinx.Common.Memory to Ryujinx.Memory
- BytesReadOnlySequenceSegment: add IsContiguousWith() and Replace() methods
- VirtualMemoryManagerBase:
  - remove generic type parameters, instead use ulong for virtual addresses and nuint for host/physical addresses
  - implement IWritableBlock
  - add virtual GetReadOnlySequence() with coalescing of contiguous segments
  - add virtual GetSpan()
  - add virtual GetWritableRegion()
  - add abstract IsMapped()
  - add virtual MapForeign(ulong, nuint, ulong)
  - add virtual Read<T>()
  - add virtual Read(ulong, Span<byte>)
  - add virtual ReadTracked<T>()
  - add virtual SignalMemoryTracking()
  - add virtual Write()
  - add virtual Write<T>()
  - add virtual WriteUntracked()
  - add virtual WriteWithRedundancyCheck()
- VirtualMemoryManagerRefCountedBase: remove generic type parameters
- AddressSpaceManager: remove redundant methods, add required overrides
- HvMemoryManager: remove redundant methods, add required overrides, add overrides for _invalidAccessHandler handling
- MemoryManager: remove redundant methods, add required overrides, add overrides for _invalidAccessHandler handling
- MemoryManagerHostMapped: remove redundant methods, add required overrides, add overrides for _invalidAccessHandler handling
- NativeMemoryManager: add get properties for Pointer and Length
- throughout: removed invalid <inheritdoc/> comments

* make HvMemoryManager class sealed

* remove unused method

* adjust MemoryManagerHostTracked

* let MemoryManagerHostTracked override WriteImpl()
2024-04-04 22:23:03 -03:00
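
A hypothetical caller of the new GetReadOnlySequence() API (variable names are illustrative): a ReadOnlySequence<byte> exposes non-contiguous guest memory segment by segment, so it can be consumed without first copying it into one temporary buffer.

using System;
using Ryujinx.Memory;

static class ZeroCopyReadExample
{
    public static int Checksum(IVirtualMemoryManager memory, ulong va, int size)
    {
        int sum = 0;

        // Each segment is a view over already-mapped host memory;
        // no intermediate copy of the guest data is made.
        foreach (ReadOnlyMemory<byte> segment in memory.GetReadOnlySequence(va, size))
        {
            foreach (byte b in segment.Span)
            {
                sum += b;
            }
        }

        return sum;
    }
}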
gdkchan
8e74fa3456 Stop clearing Modified flag on DiscardData (#6591) 2024-04-03 21:30:46 -03:00
Ac_K
6208c3e6f0 New Crowdin updates (#6550)
* New translations en_us.json (Spanish)

* New translations en_us.json (Arabic)

* New translations en_us.json (German)

* New translations en_us.json (Greek)

* New translations en_us.json (Italian)

* New translations en_us.json (Korean)

* New translations en_us.json (Polish)

* New translations en_us.json (Russian)

* New translations en_us.json (Turkish)

* New translations en_us.json (Ukrainian)

* New translations en_us.json (Chinese Traditional)

* New translations en_us.json (Portuguese, Brazilian)

* New translations en_us.json (Spanish)

* New translations en_us.json (Arabic)

* New translations en_us.json (German)

* New translations en_us.json (Greek)

* New translations en_us.json (Hebrew)

* New translations en_us.json (Italian)

* New translations en_us.json (Japanese)

* New translations en_us.json (Korean)

* New translations en_us.json (Polish)

* New translations en_us.json (Russian)

* New translations en_us.json (Turkish)

* New translations en_us.json (French)

* New translations en_us.json (Ukrainian)

* New translations en_us.json (Chinese Simplified)

* New translations en_us.json (Chinese Traditional)

* New translations en_us.json (Portuguese, Brazilian)

* New translations en_us.json (Thai)

* New translations en_us.json (Italian)

* New translations en_us.json (Japanese)

* New translations en_us.json (Polish)

* New translations en_us.json (Ukrainian)

* New translations en_us.json (Chinese Simplified)

* New translations en_us.json (Arabic)

* New translations en_us.json (Italian)

* New translations en_us.json (Russian)

* New translations en_us.json (Chinese Simplified)

* New translations en_us.json (Italian)

* New translations en_us.json (German)

* New translations en_us.json (Korean)

* New translations en_us.json (Chinese Simplified)

* New translations en_us.json (Arabic)

* New translations en_us.json (Hebrew)

* New translations en_us.json (Chinese Simplified)

* New translations en_us.json (Arabic)
2024-04-02 13:21:14 -03:00
MutantAura
7124d679fd UI: Friendly driver name reporting. (#6530)
* Implement friendly VkDriverID names for UI.

* Capitalise NVIDIA

* Prefer vendor name on macOS

* Typo fix

Co-authored-by: gdkchan <gab.dark.100@gmail.com>

---------

Co-authored-by: gdkchan <gab.dark.100@gmail.com>
2024-03-27 14:55:34 -03:00
gdkchan
b323a01738 Implement host tracked memory manager mode (#6356)
* Add host tracked memory manager mode

* Skipping flush is no longer needed

* Formatting + revert unrelated change

* LightningJit: Ensure that dest register is saved for load ops that do partial updates

* avoid allocations when doing address space lookup

Add missing improvement

* IsRmwMemory -> IsPartialRegisterUpdateMemory

* Ensure we iterate all private allocations in range

* PR feedback and potential fixes

* Simplified bridges a lot

* Skip calling SignalMappingChanged if Guest is true

* Late map bridge too

* Force address masking for prefetch instructions

* Reprotection for bridges

* Move partition list validation to separate debug method

* Move host tracked related classes to HostTracked folder

* New HostTracked namespace

* Move host tracked modes to the end of enum to avoid PPTC invalidation

---------

Co-authored-by: riperiperi <rhy3756547@hotmail.com>
2024-03-26 23:33:24 -03:00
jcm
f6d24449b6 Recreate swapchain correctly when toggling VSync (#6521)
Co-authored-by: jcm <butt@butts.com>
2024-03-26 22:54:13 -03:00
gdkchan
72bdc24db8 Disable push descriptors for Intel ARC GPUs on Windows (#6551)
* Move some init logic out of PrintGpuInformation, then delete it

* Disable push descriptors for Intel ARC on Windows

* Re-add PrintGpuInformation just to show it in the log
2024-03-26 22:27:48 -03:00
SamusAranX
43514771bf New gamecard icons (#6557)
* Replaced all gamecard images and added a blank variant

* File optimization
2024-03-23 21:33:27 +01:00
gdkchan
dbfe859ed7 Add a few missing locale strings on Avalonia (#6556)
* Add a few missing locale strings on Avalonia

* Rename LDN MitM to ldn_mitm
2024-03-23 16:31:54 -03:00
Matt Heins
c94a73ec60 Updates the default value for BufferedQuery (#6351)
AMD GPUs (possibly just RDNA 3) could hang with the previous value
until the MaxQueryRetries was hit.

Fix #6056

Co-authored-by: riperiperi <rhy3756547@hotmail.com>
2024-03-21 21:44:11 -03:00
WilliamWsyHK
20a280525f [UI] Fix Display Name Translations & Update some Chinese Translations (#6388)
* Remove incorrect additional back-slash

* Touch-up on TChinese translations

* Little touch-up on SChinese translations
2024-03-21 20:08:25 +01:00
Ac_K
75a4ea3370 New Crowdin updates (#6541)
* New translations en_us.json (French)

* New translations en_us.json (Spanish)

* New translations en_us.json (Arabic)

* New translations en_us.json (German)

* New translations en_us.json (Greek)

* New translations en_us.json (Hebrew)

* New translations en_us.json (Italian)

* New translations en_us.json (Japanese)

* New translations en_us.json (Korean)

* New translations en_us.json (Polish)

* New translations en_us.json (Russian)

* New translations en_us.json (Turkish)

* New translations en_us.json (Ukrainian)

* New translations en_us.json (Chinese Simplified)

* New translations en_us.json (Chinese Traditional)

* New translations en_us.json (Portuguese, Brazilian)

* New translations en_us.json (Thai)

* Add missing Thai language name

* Add new languages

* Enable RTL for Arabic

---------

Co-authored-by: gdkchan <gab.dark.100@gmail.com>
2024-03-20 21:07:27 -03:00
dependabot[bot]
d26ef2eec3 nuget: bump Microsoft.CodeAnalysis.CSharp from 4.8.0 to 4.9.2 (#6397)
Bumps [Microsoft.CodeAnalysis.CSharp](https://github.com/dotnet/roslyn) from 4.8.0 to 4.9.2.
- [Release notes](https://github.com/dotnet/roslyn/releases)
- [Changelog](https://github.com/dotnet/roslyn/blob/main/docs/Breaking%20API%20Changes.md)
- [Commits](https://github.com/dotnet/roslyn/commits)

---
updated-dependencies:
- dependency-name: Microsoft.CodeAnalysis.CSharp
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-17 02:55:11 +01:00
Isaac Marovitz
a0552fd78b Ava UI: Fix locale crash (#6385)
* Fix locale crash

* Apply suggestions from code review

---------

Co-authored-by: Ac_K <Acoustik666@gmail.com>
2024-03-17 02:27:14 +01:00
Isaac Marovitz
bb8c5ebae1 Ava UI: Content Dialog Fixes (#6482)
* Don’t use ContentDialogHelper when not necessary

* Remove `ExtendClientAreaToDecorationsHint`
2024-03-16 20:34:26 +01:00
dependabot[bot]
50bdda5baa nuget: bump Microsoft.IdentityModel.JsonWebTokens from 7.3.0 to 7.4.0 (#6366)
Bumps [Microsoft.IdentityModel.JsonWebTokens](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet) from 7.3.0 to 7.4.0.
- [Release notes](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/releases)
- [Changelog](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/blob/dev/CHANGELOG.md)
- [Commits](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/compare/7.3.0...v7.4.0)

---
updated-dependencies:
- dependency-name: Microsoft.IdentityModel.JsonWebTokens
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-16 20:13:30 +01:00
dependabot[bot]
18df25f66f nuget: bump the avalonia group with 2 updates (#6505)
Bumps the avalonia group with 2 updates: [Avalonia.Svg](https://github.com/wieslawsoltes/Svg.Skia) and [Avalonia.Svg.Skia](https://github.com/wieslawsoltes/Svg.Skia).


Updates `Avalonia.Svg` from 11.0.0.14 to 11.0.0.16
- [Release notes](https://github.com/wieslawsoltes/Svg.Skia/releases)
- [Changelog](https://github.com/wieslawsoltes/Svg.Skia/blob/master/CHANGELOG.md)
- [Commits](https://github.com/wieslawsoltes/Svg.Skia/commits)

Updates `Avalonia.Svg.Skia` from 11.0.0.14 to 11.0.0.16
- [Release notes](https://github.com/wieslawsoltes/Svg.Skia/releases)
- [Changelog](https://github.com/wieslawsoltes/Svg.Skia/blob/master/CHANGELOG.md)
- [Commits](https://github.com/wieslawsoltes/Svg.Skia/commits)

---
updated-dependencies:
- dependency-name: Avalonia.Svg
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: avalonia
- dependency-name: Avalonia.Svg.Skia
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: avalonia
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-16 20:02:18 +01:00
standstaff
e19e7622a3 chore: remove repetitive words (#6500)
Signed-off-by: standstaff <zhengxingru@yeah.net>
2024-03-16 19:49:54 +01:00
Isaac Marovitz
26026d1357 Fix Title Update Manager not refreshing app list (#6507) 2024-03-16 19:46:03 +01:00
TSRBerry
24068b023c discord: Update ApplicationID (#6513) 2024-03-16 19:41:38 +01:00
riperiperi
1217a8e69b GPU: Rebind RTs if scale changes when binding textures (#6493)
This fixes a longstanding issue with resolution scale that could result in flickering graphics, typically on the first frame something is drawn, or on camera cuts in cutscenes.

The root cause of the issue is that texture scale can be changed when binding textures or images. This typically happens because a texture becomes a view of a larger texture, such as a 400x225 texture becoming a view of an 800x450 texture with two levels. If the 400x225 texture is bound as a render target and has state [1x Undesired], but the storage texture is [2x Scaled], the render target texture's scale is changed to [2x Scaled] to match its new storage. This means the scale changed after the render target state was processed...

This can cause a number of issues. When render target state is processed, texture scales are examined and potentially changed so that they are all the same value. If one texture is scaled, all textures must be. If one texture is blacklisted from scaling, all of them must be. This results in a single resolution scale value being assigned to the TextureManager, which also scales the scissor and viewport values.

If the scale is chosen as 1x, and a later texture binding changes one of the textures to be 2x, the scale in TextureManager no longer matches all of the bound textures. What's worse, the scales in these textures could mismatch entirely. This typically results in the support buffer scale, viewport and scissor being wrong for at least one of the bound render targets.

This PR fixes the issue by re-evaluating render target state if any scale mismatches the expected scale after texture bindings happen. This can actually cause scale to change again, so it must loop back to perform texture bindings again. This can happen as many times as it needs to, but I don't expect it to happen more than once. Problematic bindings will just result in a blacklist, which will propagate to other bound targets.
2024-03-14 19:59:09 -03:00
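
A self-contained sketch of the re-evaluation loop described above (all names hypothetical; the real logic lives in the texture manager): binding can change a texture's scale, so render target state is re-processed until every bound target agrees with the chosen scale.

using System.Linq;

class ScaleRebindSketch
{
    private readonly float[] _boundTargetScales = { 1f, 2f };
    private float _chosenScale = 1f;

    public void UpdateRenderTargets()
    {
        bool anyMismatch;

        do
        {
            // Texture binding may have upscaled a target (e.g. it became a
            // view of a 2x-scaled storage), invalidating the chosen scale.
            anyMismatch = _boundTargetScales.Any(s => s != _chosenScale);

            if (anyMismatch)
            {
                // Re-evaluate render target state: pick one scale for all
                // targets and fix up viewport/scissor/support buffer.
                _chosenScale = _boundTargetScales.Max();

                for (int i = 0; i < _boundTargetScales.Length; i++)
                {
                    _boundTargetScales[i] = _chosenScale;
                }
            }
        }
        while (anyMismatch);
    }
}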
gdkchan
732db7581f Consider Polygon as unsupported if triangle fans are unsupported on Vulkan (#6490) 2024-03-14 19:46:57 -03:00
riperiperi
fdd3263e31 Separate guest/host tracking + unaligned protection (#6486)
* WIP: Separate guest/host tracking + unaligned protection

Allow memory manager to define support for single byte guest tracking

* Formatting

* Improve docs

* Properly handle cases where the address space bits are too low

* Address feedback
2024-03-14 19:38:27 -03:00
Isaac Marovitz
ce607db944 Ava UI: Update Ava (#6430)
* Update Ava

* Newline
2024-03-14 02:29:13 +01:00
TSRBerry
6b4ee82e5d infra: Fix updater for old Ava users (#6441)
* Add binaries with both names to release archives

* Add migration code for the new filename

* Add Ryujinx.Ava to all win/linux releases for a while
2024-03-13 23:26:35 +01:00
Keaton
e2a655f1a4 Update AutoDeleteCache.cs (#6471)
Increase the texture cache limit from 512 MB to 1 GB.
2024-03-13 17:39:39 -03:00
Nicolas Abram
d9a18919b0 Fix geometry shader passthrough issue (#6462)
* Fix geometry shader passthrough issue (Diagnosed by gdkchan)

* Fix whitespace formatting

* Fix whitespace formatting

* Bump shader cache version

* Don't apply PassthroughNV decorations to output geometry shader variables
2024-03-13 17:26:19 -03:00
Emmanuel Hansen
8354434a37 Passthrough mouse for win32 (#6450)
* passthrough mouse for win32

* remove unused enums
2024-03-11 21:21:39 -03:00
136 changed files with 8057 additions and 4205 deletions


@@ -71,7 +71,7 @@ jobs:
- uses: actions/setup-dotnet@v4
with:
global-json-file: global.json
- name: Overwrite csc problem matcher
run: echo "::add-matcher::.github/csc.json"
@@ -104,37 +104,30 @@ jobs:
if: matrix.platform.os == 'windows-latest'
run: |
pushd publish_ava
cp publish/Ryujinx.exe publish/Ryujinx.Ava.exe
7z a ../release_output/ryujinx-${{ steps.version_info.outputs.build_version }}-${{ matrix.platform.zip_os_name }}.zip publish
7z a ../release_output/test-ava-ryujinx-${{ steps.version_info.outputs.build_version }}-${{ matrix.platform.zip_os_name }}.zip publish
popd
pushd publish_sdl2_headless
7z a ../release_output/sdl2-ryujinx-headless-${{ steps.version_info.outputs.build_version }}-${{ matrix.platform.zip_os_name }}.zip publish
popd
pushd publish_ava
mv publish/Ryujinx.exe publish/Ryujinx.Ava.exe
7z a ../release_output/test-ava-ryujinx-${{ steps.version_info.outputs.build_version }}-${{ matrix.platform.zip_os_name }}.zip publish
popd
shell: bash
- name: Packing Linux builds
if: matrix.platform.os == 'ubuntu-latest'
run: |
pushd publish_ava
chmod +x publish/Ryujinx.sh publish/Ryujinx
cp publish/Ryujinx publish/Ryujinx.Ava
chmod +x publish/Ryujinx.sh publish/Ryujinx.Ava publish/Ryujinx
tar -czvf ../release_output/ryujinx-${{ steps.version_info.outputs.build_version }}-${{ matrix.platform.zip_os_name }}.tar.gz publish
tar -czvf ../release_output/test-ava-ryujinx-${{ steps.version_info.outputs.build_version }}-${{ matrix.platform.zip_os_name }}.tar.gz publish
popd
pushd publish_sdl2_headless
chmod +x publish/Ryujinx.sh publish/Ryujinx.Headless.SDL2
tar -czvf ../release_output/sdl2-ryujinx-headless-${{ steps.version_info.outputs.build_version }}-${{ matrix.platform.zip_os_name }}.tar.gz publish
popd
pushd publish_ava
mv publish/Ryujinx publish/Ryujinx.Ava
chmod +x publish/Ryujinx.sh publish/Ryujinx.Ava
tar -czvf ../release_output/test-ava-ryujinx-${{ steps.version_info.outputs.build_version }}-${{ matrix.platform.zip_os_name }}.tar.gz publish
popd
shell: bash
- name: Pushing new release


@@ -3,24 +3,24 @@
<ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
</PropertyGroup>
<ItemGroup>
<PackageVersion Include="Avalonia" Version="11.0.7" />
<PackageVersion Include="Avalonia.Controls.DataGrid" Version="11.0.7" />
<PackageVersion Include="Avalonia.Desktop" Version="11.0.7" />
<PackageVersion Include="Avalonia.Diagnostics" Version="11.0.7" />
<PackageVersion Include="Avalonia.Markup.Xaml.Loader" Version="11.0.7" />
<PackageVersion Include="Avalonia.Svg" Version="11.0.0.13" />
<PackageVersion Include="Avalonia.Svg.Skia" Version="11.0.0.13" />
<PackageVersion Include="Avalonia" Version="11.0.10" />
<PackageVersion Include="Avalonia.Controls.DataGrid" Version="11.0.10" />
<PackageVersion Include="Avalonia.Desktop" Version="11.0.10" />
<PackageVersion Include="Avalonia.Diagnostics" Version="11.0.10" />
<PackageVersion Include="Avalonia.Markup.Xaml.Loader" Version="11.0.10" />
<PackageVersion Include="Avalonia.Svg" Version="11.0.0.16" />
<PackageVersion Include="Avalonia.Svg.Skia" Version="11.0.0.16" />
<PackageVersion Include="CommandLineParser" Version="2.9.1" />
<PackageVersion Include="Concentus" Version="1.1.7" />
<PackageVersion Include="DiscordRichPresence" Version="1.2.1.24" />
<PackageVersion Include="DynamicData" Version="8.3.27" />
<PackageVersion Include="DynamicData" Version="8.4.1" />
<PackageVersion Include="FluentAvaloniaUI" Version="2.0.5" />
<PackageVersion Include="GtkSharp.Dependencies" Version="1.1.1" />
<PackageVersion Include="GtkSharp.Dependencies.osx" Version="0.0.5" />
<PackageVersion Include="LibHac" Version="0.19.0" />
<PackageVersion Include="Microsoft.CodeAnalysis.Analyzers" Version="3.3.4" />
<PackageVersion Include="Microsoft.CodeAnalysis.CSharp" Version="4.8.0" />
<PackageVersion Include="Microsoft.IdentityModel.JsonWebTokens" Version="7.3.0" />
<PackageVersion Include="Microsoft.CodeAnalysis.CSharp" Version="4.9.2" />
<PackageVersion Include="Microsoft.IdentityModel.JsonWebTokens" Version="7.4.0" />
<PackageVersion Include="Microsoft.NET.Test.Sdk" Version="17.9.0" />
<PackageVersion Include="Microsoft.IO.RecyclableMemoryStream" Version="3.0.0" />
<PackageVersion Include="MsgPack.Cli" Version="1.0.1" />
@@ -45,7 +45,6 @@
<PackageVersion Include="SixLabors.ImageSharp" Version="2.1.7" />
<PackageVersion Include="SixLabors.ImageSharp.Drawing" Version="1.0.0" />
<PackageVersion Include="SPB" Version="0.0.4-build32" />
<PackageVersion Include="System.Drawing.Common" Version="8.0.2" />
<PackageVersion Include="System.IO.Hashing" Version="8.0.0" />
<PackageVersion Include="System.Management" Version="8.0.0" />
<PackageVersion Include="UnicornEngine.Unicorn" Version="2.0.2-rc1-fb78016" />


@@ -33,8 +33,3 @@ Project Docs
=================
To be added. Many project files will contain basic XML docs for key functions and classes in the meantime.
Other Information
=================
- N/A


@@ -157,7 +157,7 @@ namespace ARMeilleure.Instructions
context.Copy(temp, value);
if (!context.Memory.Type.IsHostMapped())
if (!context.Memory.Type.IsHostMappedOrTracked())
{
context.Branch(lblEnd);
@@ -198,7 +198,7 @@ namespace ARMeilleure.Instructions
SetInt(context, rt, value);
if (!context.Memory.Type.IsHostMapped())
if (!context.Memory.Type.IsHostMappedOrTracked())
{
context.Branch(lblEnd);
@@ -265,7 +265,7 @@ namespace ARMeilleure.Instructions
context.Copy(GetVec(rt), value);
if (!context.Memory.Type.IsHostMapped())
if (!context.Memory.Type.IsHostMappedOrTracked())
{
context.Branch(lblEnd);
@@ -312,7 +312,7 @@ namespace ARMeilleure.Instructions
break;
}
if (!context.Memory.Type.IsHostMapped())
if (!context.Memory.Type.IsHostMappedOrTracked())
{
context.Branch(lblEnd);
@@ -385,7 +385,7 @@ namespace ARMeilleure.Instructions
break;
}
if (!context.Memory.Type.IsHostMapped())
if (!context.Memory.Type.IsHostMappedOrTracked())
{
context.Branch(lblEnd);
@@ -403,6 +403,27 @@ namespace ARMeilleure.Instructions
{
return EmitHostMappedPointer(context, address);
}
else if (context.Memory.Type.IsHostTracked())
{
if (address.Type == OperandType.I32)
{
address = context.ZeroExtend32(OperandType.I64, address);
}
if (context.Memory.Type == MemoryManagerType.HostTracked)
{
Operand mask = Const(ulong.MaxValue >> (64 - context.Memory.AddressSpaceBits));
address = context.BitwiseAnd(address, mask);
}
Operand ptBase = !context.HasPtc
? Const(context.Memory.PageTablePointer.ToInt64())
: Const(context.Memory.PageTablePointer.ToInt64(), Ptc.PageTableSymbol);
Operand ptOffset = context.ShiftRightUI(address, Const(PageBits));
return context.Add(address, context.Load(OperandType.I64, context.Add(ptBase, context.ShiftLeft(ptOffset, Const(3)))));
}
int ptLevelBits = context.Memory.AddressSpaceBits - PageBits;
int ptLevelSize = 1 << ptLevelBits;

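Restated in plain C# (an interpretive sketch of the IR emitted in the hunk above; the entry semantics are an assumption drawn from the final add): each 8-byte page table entry appears to hold the host-minus-guest delta for its page, so translation is a shift, a load and an add.

static class HostTrackedTranslationSketch
{
    const int PageBits = 12; // assuming 4KB pages

    public static ulong Translate(ulong va, ulong[] pageTable, int addressSpaceBits)
    {
        // HostTracked masks the address down to the address space size first.
        va &= ulong.MaxValue >> (64 - addressSpaceBits);

        // entry = hostPageBase - guestPageBase, so adding it to the full VA
        // (page base + page offset) yields the host address.
        ulong entry = pageTable[va >> PageBits];
        return va + entry;
    }
}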

@@ -91,54 +91,54 @@ namespace ARMeilleure.Instructions
#region "Read"
public static byte ReadByte(ulong address)
{
return GetMemoryManager().ReadTracked<byte>(address);
return GetMemoryManager().ReadGuest<byte>(address);
}
public static ushort ReadUInt16(ulong address)
{
return GetMemoryManager().ReadTracked<ushort>(address);
return GetMemoryManager().ReadGuest<ushort>(address);
}
public static uint ReadUInt32(ulong address)
{
return GetMemoryManager().ReadTracked<uint>(address);
return GetMemoryManager().ReadGuest<uint>(address);
}
public static ulong ReadUInt64(ulong address)
{
return GetMemoryManager().ReadTracked<ulong>(address);
return GetMemoryManager().ReadGuest<ulong>(address);
}
public static V128 ReadVector128(ulong address)
{
return GetMemoryManager().ReadTracked<V128>(address);
return GetMemoryManager().ReadGuest<V128>(address);
}
#endregion
#region "Write"
public static void WriteByte(ulong address, byte value)
{
GetMemoryManager().Write(address, value);
GetMemoryManager().WriteGuest(address, value);
}
public static void WriteUInt16(ulong address, ushort value)
{
GetMemoryManager().Write(address, value);
GetMemoryManager().WriteGuest(address, value);
}
public static void WriteUInt32(ulong address, uint value)
{
GetMemoryManager().Write(address, value);
GetMemoryManager().WriteGuest(address, value);
}
public static void WriteUInt64(ulong address, ulong value)
{
GetMemoryManager().Write(address, value);
GetMemoryManager().WriteGuest(address, value);
}
public static void WriteVector128(ulong address, V128 value)
{
GetMemoryManager().Write(address, value);
GetMemoryManager().WriteGuest(address, value);
}
#endregion


@@ -28,6 +28,17 @@ namespace ARMeilleure.Memory
/// <returns>The data</returns>
T ReadTracked<T>(ulong va) where T : unmanaged;
/// <summary>
/// Reads data from CPU mapped memory, from guest code. (with read tracking)
/// </summary>
/// <typeparam name="T">Type of the data being read</typeparam>
/// <param name="va">Virtual address of the data in memory</param>
/// <returns>The data</returns>
T ReadGuest<T>(ulong va) where T : unmanaged
{
return ReadTracked<T>(va);
}
/// <summary>
/// Writes data to CPU mapped memory.
/// </summary>
@@ -36,6 +47,17 @@ namespace ARMeilleure.Memory
/// <param name="value">Data to be written</param>
void Write<T>(ulong va, T value) where T : unmanaged;
/// <summary>
/// Writes data to CPU mapped memory, from guest code.
/// </summary>
/// <typeparam name="T">Type of the data being written</typeparam>
/// <param name="va">Virtual address to write the data into</param>
/// <param name="value">Data to be written</param>
void WriteGuest<T>(ulong va, T value) where T : unmanaged
{
Write(va, value);
}
/// <summary>
/// Gets a read-only span of data from CPU mapped memory.
/// </summary>

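ReadGuest/WriteGuest above are added as default interface methods, so existing IMemoryManager implementations keep compiling and silently fall back to the tracked paths; only managers that need guest-specific behavior override them. A minimal sketch of the pattern (hypothetical interface):

interface IReader
{
    int Read(int address);

    // Default body: callers get the plain read unless the
    // implementation opts in with an override.
    int ReadGuest(int address) => Read(address);
}

class PlainReader : IReader
{
    public int Read(int address) => address; // stub
}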

@@ -29,6 +29,18 @@ namespace ARMeilleure.Memory
/// Allows invalid access from JIT code to the rest of the program, but is faster.
/// </summary>
HostMappedUnsafe,
/// <summary>
/// High level implementation using a software flat page table for address translation
/// with no support for handling invalid or non-contiguous memory access.
/// </summary>
HostTracked,
/// <summary>
/// High level implementation using a software flat page table for address translation
/// without masking the address and no support for handling invalid or non-contiguous memory access.
/// </summary>
HostTrackedUnsafe,
}
public static class MemoryManagerTypeExtensions
@@ -37,5 +49,15 @@ namespace ARMeilleure.Memory
{
return type == MemoryManagerType.HostMapped || type == MemoryManagerType.HostMappedUnsafe;
}
public static bool IsHostTracked(this MemoryManagerType type)
{
return type == MemoryManagerType.HostTracked || type == MemoryManagerType.HostTrackedUnsafe;
}
public static bool IsHostMappedOrTracked(this MemoryManagerType type)
{
return type.IsHostMapped() || type.IsHostTracked();
}
}
}


@@ -21,10 +21,8 @@ namespace ARMeilleure.Signal
private const uint EXCEPTION_ACCESS_VIOLATION = 0xc0000005;
private static Operand EmitGenericRegionCheck(EmitterContext context, IntPtr signalStructPtr, Operand faultAddress, Operand isWrite, int rangeStructSize, ulong pageSize)
private static Operand EmitGenericRegionCheck(EmitterContext context, IntPtr signalStructPtr, Operand faultAddress, Operand isWrite, int rangeStructSize)
{
ulong pageMask = pageSize - 1;
Operand inRegionLocal = context.AllocateLocal(OperandType.I32);
context.Copy(inRegionLocal, Const(0));
@@ -51,7 +49,7 @@ namespace ARMeilleure.Signal
// Only call tracking if in range.
context.BranchIfFalse(nextLabel, inRange, BasicBlockFrequency.Cold);
Operand offset = context.BitwiseAnd(context.Subtract(faultAddress, rangeAddress), Const(~pageMask));
Operand offset = context.Subtract(faultAddress, rangeAddress);
// Call the tracking action, with the pointer's relative offset to the base address.
Operand trackingActionPtr = context.Load(OperandType.I64, Const((ulong)signalStructPtr + rangeBaseOffset + 20));
@@ -62,8 +60,10 @@ namespace ARMeilleure.Signal
// Tracking action should be non-null to call it, otherwise assume false return.
context.BranchIfFalse(skipActionLabel, trackingActionPtr);
Operand result = context.Call(trackingActionPtr, OperandType.I32, offset, Const(pageSize), isWrite);
context.Copy(inRegionLocal, result);
Operand result = context.Call(trackingActionPtr, OperandType.I64, offset, Const(1UL), isWrite);
context.Copy(inRegionLocal, context.ICompareNotEqual(result, Const(0UL)));
GenerateFaultAddressPatchCode(context, faultAddress, result);
context.MarkLabel(skipActionLabel);
@@ -155,7 +155,7 @@ namespace ARMeilleure.Signal
throw new PlatformNotSupportedException();
}
public static byte[] GenerateUnixSignalHandler(IntPtr signalStructPtr, int rangeStructSize, ulong pageSize)
public static byte[] GenerateUnixSignalHandler(IntPtr signalStructPtr, int rangeStructSize)
{
EmitterContext context = new();
@@ -168,7 +168,7 @@ namespace ARMeilleure.Signal
Operand isWrite = context.ICompareNotEqual(writeFlag, Const(0L)); // Normalize to 0/1.
Operand isInRegion = EmitGenericRegionCheck(context, signalStructPtr, faultAddress, isWrite, rangeStructSize, pageSize);
Operand isInRegion = EmitGenericRegionCheck(context, signalStructPtr, faultAddress, isWrite, rangeStructSize);
Operand endLabel = Label();
@@ -203,7 +203,7 @@ namespace ARMeilleure.Signal
return Compiler.Compile(cfg, argTypes, OperandType.None, CompilerOptions.HighCq, RuntimeInformation.ProcessArchitecture).Code;
}
public static byte[] GenerateWindowsSignalHandler(IntPtr signalStructPtr, int rangeStructSize, ulong pageSize)
public static byte[] GenerateWindowsSignalHandler(IntPtr signalStructPtr, int rangeStructSize)
{
EmitterContext context = new();
@@ -232,7 +232,7 @@ namespace ARMeilleure.Signal
Operand isWrite = context.ICompareNotEqual(writeFlag, Const(0L)); // Normalize to 0/1.
Operand isInRegion = EmitGenericRegionCheck(context, signalStructPtr, faultAddress, isWrite, rangeStructSize, pageSize);
Operand isInRegion = EmitGenericRegionCheck(context, signalStructPtr, faultAddress, isWrite, rangeStructSize);
Operand endLabel = Label();
@@ -256,5 +256,86 @@ namespace ARMeilleure.Signal
return Compiler.Compile(cfg, argTypes, OperandType.I32, CompilerOptions.HighCq, RuntimeInformation.ProcessArchitecture).Code;
}
private static void GenerateFaultAddressPatchCode(EmitterContext context, Operand faultAddress, Operand newAddress)
{
if (RuntimeInformation.ProcessArchitecture == Architecture.Arm64)
{
if (SupportsFaultAddressPatchingForHostOs())
{
Operand lblSkip = Label();
context.BranchIf(lblSkip, faultAddress, newAddress, Comparison.Equal);
Operand ucontextPtr = context.LoadArgument(OperandType.I64, 2);
Operand pcCtxAddress = default;
ulong baseRegsOffset = 0;
if (OperatingSystem.IsLinux())
{
pcCtxAddress = context.Add(ucontextPtr, Const(440UL));
baseRegsOffset = 184UL;
}
else if (OperatingSystem.IsMacOS() || OperatingSystem.IsIOS())
{
ucontextPtr = context.Load(OperandType.I64, context.Add(ucontextPtr, Const(48UL)));
pcCtxAddress = context.Add(ucontextPtr, Const(272UL));
baseRegsOffset = 16UL;
}
Operand pc = context.Load(OperandType.I64, pcCtxAddress);
Operand reg = GetAddressRegisterFromArm64Instruction(context, pc);
Operand reg64 = context.ZeroExtend32(OperandType.I64, reg);
Operand regCtxAddress = context.Add(ucontextPtr, context.Add(context.ShiftLeft(reg64, Const(3)), Const(baseRegsOffset)));
Operand regAddress = context.Load(OperandType.I64, regCtxAddress);
Operand addressDelta = context.Subtract(regAddress, faultAddress);
context.Store(regCtxAddress, context.Add(newAddress, addressDelta));
context.MarkLabel(lblSkip);
}
}
}
private static Operand GetAddressRegisterFromArm64Instruction(EmitterContext context, Operand pc)
{
Operand inst = context.Load(OperandType.I32, pc);
Operand reg = context.AllocateLocal(OperandType.I32);
Operand isSysInst = context.ICompareEqual(context.BitwiseAnd(inst, Const(0xFFF80000)), Const(0xD5080000));
Operand lblSys = Label();
Operand lblEnd = Label();
context.BranchIfTrue(lblSys, isSysInst, BasicBlockFrequency.Cold);
context.Copy(reg, context.BitwiseAnd(context.ShiftRightUI(inst, Const(5)), Const(0x1F)));
context.Branch(lblEnd);
context.MarkLabel(lblSys);
context.Copy(reg, context.BitwiseAnd(inst, Const(0x1F)));
context.MarkLabel(lblEnd);
return reg;
}
public static bool SupportsFaultAddressPatchingForHost()
{
return SupportsFaultAddressPatchingForHostArch() && SupportsFaultAddressPatchingForHostOs();
}
private static bool SupportsFaultAddressPatchingForHostArch()
{
return RuntimeInformation.ProcessArchitecture == Architecture.Arm64;
}
private static bool SupportsFaultAddressPatchingForHostOs()
{
return OperatingSystem.IsLinux() || OperatingSystem.IsMacOS() || OperatingSystem.IsIOS();
}
}
}


@@ -5,10 +5,10 @@ namespace Ryujinx.Common.Collections
/// </summary>
public class IntrusiveRedBlackTreeNode<T> where T : IntrusiveRedBlackTreeNode<T>
{
internal bool Color = true;
internal T Left;
internal T Right;
internal T Parent;
public bool Color = true;
public T Left;
public T Right;
public T Parent;
public T Predecessor => IntrusiveRedBlackTreeImpl<T>.PredecessorOf((T)this);
public T Successor => IntrusiveRedBlackTreeImpl<T>.SuccessorOf((T)this);


@@ -4,7 +4,7 @@ using System.Threading;
namespace Ryujinx.Common.Memory
{
public sealed partial class ByteMemoryPool
public partial class ByteMemoryPool
{
/// <summary>
/// Represents a <see cref="IMemoryOwner{Byte}"/> that wraps an array rented from


@@ -6,24 +6,8 @@ namespace Ryujinx.Common.Memory
/// <summary>
/// Provides a pool of re-usable byte array instances.
/// </summary>
public sealed partial class ByteMemoryPool
public static partial class ByteMemoryPool
{
private static readonly ByteMemoryPool _shared = new();
/// <summary>
/// Constructs a <see cref="ByteMemoryPool"/> instance. Private to force access through
/// the <see cref="ByteMemoryPool.Shared"/> instance.
/// </summary>
private ByteMemoryPool()
{
// No implementation
}
/// <summary>
/// Retrieves a shared <see cref="ByteMemoryPool"/> instance.
/// </summary>
public static ByteMemoryPool Shared => _shared;
/// <summary>
/// Returns the maximum buffer size supported by this pool.
/// </summary>
@@ -95,6 +79,20 @@ namespace Ryujinx.Common.Memory
return buffer;
}
/// <summary>
/// Copies <paramref name="buffer"/> into a newly rented byte memory buffer.
/// </summary>
/// <param name="buffer">The byte buffer to copy</param>
/// <returns>A <see cref="IMemoryOwner{Byte}"/> wrapping the rented memory with <paramref name="buffer"/> copied to it</returns>
public static IMemoryOwner<byte> RentCopy(ReadOnlySpan<byte> buffer)
{
var copy = RentImpl(buffer.Length);
buffer.CopyTo(copy.Memory.Span);
return copy;
}
private static ByteMemoryPoolBuffer RentImpl(int length)
{
if ((uint)length > Array.MaxLength)

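Hypothetical usage of the RentCopy helper added above: it rents a pooled buffer sized to the input, copies the input into it, and hands back an owner whose disposal returns the buffer to the pool.

using System;
using System.Buffers;
using Ryujinx.Common.Memory;

static class RentCopyExample
{
    static void Main()
    {
        ReadOnlySpan<byte> source = stackalloc byte[] { 1, 2, 3 };

        // Disposing the owner returns the rented buffer to the pool.
        using IMemoryOwner<byte> owner = ByteMemoryPool.RentCopy(source);

        Console.WriteLine(owner.Memory.Span[0]); // 1
    }
}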

@@ -1,5 +1,3 @@
using Ryujinx.Common;
using Ryujinx.Common.Collections;
using Ryujinx.Memory;
using System;
@@ -7,175 +5,23 @@ namespace Ryujinx.Cpu
{
public class AddressSpace : IDisposable
{
private const int DefaultBlockAlignment = 1 << 20;
private enum MappingType : byte
{
None,
Private,
Shared,
}
private class Mapping : IntrusiveRedBlackTreeNode<Mapping>, IComparable<Mapping>
{
public ulong Address { get; private set; }
public ulong Size { get; private set; }
public ulong EndAddress => Address + Size;
public MappingType Type { get; private set; }
public Mapping(ulong address, ulong size, MappingType type)
{
Address = address;
Size = size;
Type = type;
}
public Mapping Split(ulong splitAddress)
{
ulong leftSize = splitAddress - Address;
ulong rightSize = EndAddress - splitAddress;
Mapping left = new(Address, leftSize, Type);
Address = splitAddress;
Size = rightSize;
return left;
}
public void UpdateState(MappingType newType)
{
Type = newType;
}
public void Extend(ulong sizeDelta)
{
Size += sizeDelta;
}
public int CompareTo(Mapping other)
{
if (Address < other.Address)
{
return -1;
}
else if (Address <= other.EndAddress - 1UL)
{
return 0;
}
else
{
return 1;
}
}
}
private class PrivateMapping : IntrusiveRedBlackTreeNode<PrivateMapping>, IComparable<PrivateMapping>
{
public ulong Address { get; private set; }
public ulong Size { get; private set; }
public ulong EndAddress => Address + Size;
public PrivateMemoryAllocation PrivateAllocation { get; private set; }
public PrivateMapping(ulong address, ulong size, PrivateMemoryAllocation privateAllocation)
{
Address = address;
Size = size;
PrivateAllocation = privateAllocation;
}
public PrivateMapping Split(ulong splitAddress)
{
ulong leftSize = splitAddress - Address;
ulong rightSize = EndAddress - splitAddress;
(var leftAllocation, PrivateAllocation) = PrivateAllocation.Split(leftSize);
PrivateMapping left = new(Address, leftSize, leftAllocation);
Address = splitAddress;
Size = rightSize;
return left;
}
public void Map(MemoryBlock baseBlock, MemoryBlock mirrorBlock, PrivateMemoryAllocation newAllocation)
{
baseBlock.MapView(newAllocation.Memory, newAllocation.Offset, Address, Size);
mirrorBlock.MapView(newAllocation.Memory, newAllocation.Offset, Address, Size);
PrivateAllocation = newAllocation;
}
public void Unmap(MemoryBlock baseBlock, MemoryBlock mirrorBlock)
{
if (PrivateAllocation.IsValid)
{
baseBlock.UnmapView(PrivateAllocation.Memory, Address, Size);
mirrorBlock.UnmapView(PrivateAllocation.Memory, Address, Size);
PrivateAllocation.Dispose();
}
PrivateAllocation = default;
}
public void Extend(ulong sizeDelta)
{
Size += sizeDelta;
}
public int CompareTo(PrivateMapping other)
{
if (Address < other.Address)
{
return -1;
}
else if (Address <= other.EndAddress - 1UL)
{
return 0;
}
else
{
return 1;
}
}
}
private readonly MemoryBlock _backingMemory;
private readonly PrivateMemoryAllocator _privateMemoryAllocator;
private readonly IntrusiveRedBlackTree<Mapping> _mappingTree;
private readonly IntrusiveRedBlackTree<PrivateMapping> _privateTree;
private readonly object _treeLock;
private readonly bool _supports4KBPages;
public MemoryBlock Base { get; }
public MemoryBlock Mirror { get; }
public ulong AddressSpaceSize { get; }
public AddressSpace(MemoryBlock backingMemory, MemoryBlock baseMemory, MemoryBlock mirrorMemory, ulong addressSpaceSize, bool supports4KBPages)
public AddressSpace(MemoryBlock backingMemory, MemoryBlock baseMemory, MemoryBlock mirrorMemory, ulong addressSpaceSize)
{
if (!supports4KBPages)
{
_privateMemoryAllocator = new PrivateMemoryAllocator(DefaultBlockAlignment, MemoryAllocationFlags.Mirrorable | MemoryAllocationFlags.NoMap);
_mappingTree = new IntrusiveRedBlackTree<Mapping>();
_privateTree = new IntrusiveRedBlackTree<PrivateMapping>();
_treeLock = new object();
_mappingTree.Add(new Mapping(0UL, addressSpaceSize, MappingType.None));
_privateTree.Add(new PrivateMapping(0UL, addressSpaceSize, default));
}
_backingMemory = backingMemory;
_supports4KBPages = supports4KBPages;
Base = baseMemory;
Mirror = mirrorMemory;
AddressSpaceSize = addressSpaceSize;
}
public static bool TryCreate(MemoryBlock backingMemory, ulong asSize, bool supports4KBPages, out AddressSpace addressSpace)
public static bool TryCreate(MemoryBlock backingMemory, ulong asSize, out AddressSpace addressSpace)
{
addressSpace = null;
@@ -193,7 +39,7 @@ namespace Ryujinx.Cpu
{
baseMemory = new MemoryBlock(addressSpaceSize, AsFlags);
mirrorMemory = new MemoryBlock(addressSpaceSize, AsFlags);
addressSpace = new AddressSpace(backingMemory, baseMemory, mirrorMemory, addressSpaceSize, supports4KBPages);
addressSpace = new AddressSpace(backingMemory, baseMemory, mirrorMemory, addressSpaceSize);
break;
}
@@ -209,289 +55,20 @@ namespace Ryujinx.Cpu
public void Map(ulong va, ulong pa, ulong size, MemoryMapFlags flags)
{
if (_supports4KBPages)
{
Base.MapView(_backingMemory, pa, va, size);
Mirror.MapView(_backingMemory, pa, va, size);
return;
}
lock (_treeLock)
{
ulong alignment = MemoryBlock.GetPageSize();
bool isAligned = ((va | pa | size) & (alignment - 1)) == 0;
if (flags.HasFlag(MemoryMapFlags.Private) && !isAligned)
{
Update(va, pa, size, MappingType.Private);
}
else
{
// The update method assumes that shared mappings are already aligned.
if (!flags.HasFlag(MemoryMapFlags.Private))
{
if ((va & (alignment - 1)) != (pa & (alignment - 1)))
{
throw new InvalidMemoryRegionException($"Virtual address 0x{va:X} and physical address 0x{pa:X} are misaligned and can't be aligned.");
}
ulong endAddress = va + size;
va = BitUtils.AlignDown(va, alignment);
pa = BitUtils.AlignDown(pa, alignment);
size = BitUtils.AlignUp(endAddress, alignment) - va;
}
Update(va, pa, size, MappingType.Shared);
}
}
Base.MapView(_backingMemory, pa, va, size);
Mirror.MapView(_backingMemory, pa, va, size);
}
public void Unmap(ulong va, ulong size)
{
if (_supports4KBPages)
{
Base.UnmapView(_backingMemory, va, size);
Mirror.UnmapView(_backingMemory, va, size);
return;
}
lock (_treeLock)
{
Update(va, 0UL, size, MappingType.None);
}
}
private void Update(ulong va, ulong pa, ulong size, MappingType type)
{
Mapping map = _mappingTree.GetNode(new Mapping(va, 1UL, MappingType.None));
Update(map, va, pa, size, type);
}
private Mapping Update(Mapping map, ulong va, ulong pa, ulong size, MappingType type)
{
ulong endAddress = va + size;
for (; map != null; map = map.Successor)
{
if (map.Address < va)
{
_mappingTree.Add(map.Split(va));
}
if (map.EndAddress > endAddress)
{
Mapping newMap = map.Split(endAddress);
_mappingTree.Add(newMap);
map = newMap;
}
switch (type)
{
case MappingType.None:
if (map.Type == MappingType.Shared)
{
ulong startOffset = map.Address - va;
ulong mapVa = va + startOffset;
ulong mapSize = Math.Min(size - startOffset, map.Size);
ulong mapEndAddress = mapVa + mapSize;
ulong alignment = MemoryBlock.GetPageSize();
mapVa = BitUtils.AlignDown(mapVa, alignment);
mapEndAddress = BitUtils.AlignUp(mapEndAddress, alignment);
mapSize = mapEndAddress - mapVa;
Base.UnmapView(_backingMemory, mapVa, mapSize);
Mirror.UnmapView(_backingMemory, mapVa, mapSize);
}
else
{
UnmapPrivate(va, size);
}
break;
case MappingType.Private:
if (map.Type == MappingType.Shared)
{
throw new InvalidMemoryRegionException($"Private mapping request at 0x{va:X} with size 0x{size:X} overlaps shared mapping at 0x{map.Address:X} with size 0x{map.Size:X}.");
}
else
{
MapPrivate(va, size);
}
break;
case MappingType.Shared:
if (map.Type != MappingType.None)
{
throw new InvalidMemoryRegionException($"Shared mapping request at 0x{va:X} with size 0x{size:X} overlaps mapping at 0x{map.Address:X} with size 0x{map.Size:X}.");
}
else
{
ulong startOffset = map.Address - va;
ulong mapPa = pa + startOffset;
ulong mapVa = va + startOffset;
ulong mapSize = Math.Min(size - startOffset, map.Size);
Base.MapView(_backingMemory, mapPa, mapVa, mapSize);
Mirror.MapView(_backingMemory, mapPa, mapVa, mapSize);
}
break;
}
map.UpdateState(type);
map = TryCoalesce(map);
if (map.EndAddress >= endAddress)
{
break;
}
}
return map;
}
private Mapping TryCoalesce(Mapping map)
{
Mapping previousMap = map.Predecessor;
Mapping nextMap = map.Successor;
if (previousMap != null && CanCoalesce(previousMap, map))
{
previousMap.Extend(map.Size);
_mappingTree.Remove(map);
map = previousMap;
}
if (nextMap != null && CanCoalesce(map, nextMap))
{
map.Extend(nextMap.Size);
_mappingTree.Remove(nextMap);
}
return map;
}
private static bool CanCoalesce(Mapping left, Mapping right)
{
return left.Type == right.Type;
}
private void MapPrivate(ulong va, ulong size)
{
ulong endAddress = va + size;
ulong alignment = MemoryBlock.GetPageSize();
// Expand the range outwards based on page size to ensure that at least the requested region is mapped.
ulong vaAligned = BitUtils.AlignDown(va, alignment);
ulong endAddressAligned = BitUtils.AlignUp(endAddress, alignment);
PrivateMapping map = _privateTree.GetNode(new PrivateMapping(va, 1UL, default));
for (; map != null; map = map.Successor)
{
if (!map.PrivateAllocation.IsValid)
{
if (map.Address < vaAligned)
{
_privateTree.Add(map.Split(vaAligned));
}
if (map.EndAddress > endAddressAligned)
{
PrivateMapping newMap = map.Split(endAddressAligned);
_privateTree.Add(newMap);
map = newMap;
}
map.Map(Base, Mirror, _privateMemoryAllocator.Allocate(map.Size, MemoryBlock.GetPageSize()));
}
if (map.EndAddress >= endAddressAligned)
{
break;
}
}
}
private void UnmapPrivate(ulong va, ulong size)
{
ulong endAddress = va + size;
ulong alignment = MemoryBlock.GetPageSize();
// Shrink the range inwards based on page size to ensure we won't unmap memory that might be still in use.
ulong vaAligned = BitUtils.AlignUp(va, alignment);
ulong endAddressAligned = BitUtils.AlignDown(endAddress, alignment);
if (endAddressAligned <= vaAligned)
{
return;
}
PrivateMapping map = _privateTree.GetNode(new PrivateMapping(va, 1UL, default));
for (; map != null; map = map.Successor)
{
if (map.PrivateAllocation.IsValid)
{
if (map.Address < vaAligned)
{
_privateTree.Add(map.Split(vaAligned));
}
if (map.EndAddress > endAddressAligned)
{
PrivateMapping newMap = map.Split(endAddressAligned);
_privateTree.Add(newMap);
map = newMap;
}
map.Unmap(Base, Mirror);
map = TryCoalesce(map);
}
if (map.EndAddress >= endAddressAligned)
{
break;
}
}
}
private PrivateMapping TryCoalesce(PrivateMapping map)
{
PrivateMapping previousMap = map.Predecessor;
PrivateMapping nextMap = map.Successor;
if (previousMap != null && CanCoalesce(previousMap, map))
{
previousMap.Extend(map.Size);
_privateTree.Remove(map);
map = previousMap;
}
if (nextMap != null && CanCoalesce(map, nextMap))
{
map.Extend(nextMap.Size);
_privateTree.Remove(nextMap);
}
return map;
}
private static bool CanCoalesce(PrivateMapping left, PrivateMapping right)
{
return !left.PrivateAllocation.IsValid && !right.PrivateAllocation.IsValid;
Base.UnmapView(_backingMemory, va, size);
Mirror.UnmapView(_backingMemory, va, size);
}
public void Dispose()
{
GC.SuppressFinalize(this);
_privateMemoryAllocator?.Dispose();
Base.Dispose();
Mirror.Dispose();
}


@@ -38,7 +38,7 @@ namespace Ryujinx.Cpu.AppleHv
private readonly HvIpaAllocator _ipaAllocator;
public HvMemoryBlockAllocator(HvIpaAllocator ipaAllocator, int blockAlignment) : base(blockAlignment, MemoryAllocationFlags.None)
public HvMemoryBlockAllocator(HvIpaAllocator ipaAllocator, ulong blockAlignment) : base(blockAlignment, MemoryAllocationFlags.None)
{
_ipaAllocator = ipaAllocator;
}


@@ -3,12 +3,11 @@ using Ryujinx.Memory;
using Ryujinx.Memory.Range;
using Ryujinx.Memory.Tracking;
using System;
using System.Buffers;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Runtime.Versioning;
using System.Threading;
namespace Ryujinx.Cpu.AppleHv
{
@@ -16,23 +15,8 @@ namespace Ryujinx.Cpu.AppleHv
/// Represents a CPU memory manager which maps guest virtual memory directly onto the Hypervisor page table.
/// </summary>
[SupportedOSPlatform("macos")]
public class HvMemoryManager : VirtualMemoryManagerRefCountedBase<ulong, ulong>, IMemoryManager, IVirtualMemoryManagerTracked, IWritableBlock
public sealed class HvMemoryManager : VirtualMemoryManagerRefCountedBase, IMemoryManager, IVirtualMemoryManagerTracked
{
public const int PageToPteShift = 5; // 32 pages (2 bits each) in one ulong page table entry.
public const ulong BlockMappedMask = 0x5555555555555555; // First bit of each table entry set.
private enum HostMappedPtBits : ulong
{
Unmapped = 0,
Mapped,
WriteTracked,
ReadWriteTracked,
MappedReplicated = 0x5555555555555555,
WriteTrackedReplicated = 0xaaaaaaaaaaaaaaaa,
ReadWriteTrackedReplicated = ulong.MaxValue,
}
private readonly InvalidAccessHandler _invalidAccessHandler;
private readonly HvAddressSpace _addressSpace;
@@ -42,9 +26,9 @@ namespace Ryujinx.Cpu.AppleHv
private readonly MemoryBlock _backingMemory;
private readonly PageTable<ulong> _pageTable;
private readonly ulong[] _pageBitmap;
private readonly ManagedPageFlags _pages;
public bool Supports4KBPages => true;
public bool UsesPrivateAllocations => false;
public int AddressSpaceBits { get; }
@@ -84,7 +68,7 @@ namespace Ryujinx.Cpu.AppleHv
AddressSpaceBits = asBits;
_pageBitmap = new ulong[1 << (AddressSpaceBits - (PageBits + PageToPteShift))];
_pages = new ManagedPageFlags(AddressSpaceBits);
Tracking = new MemoryTracking(this, PageSize, invalidAccessHandler);
}
@@ -95,7 +79,7 @@ namespace Ryujinx.Cpu.AppleHv
PtMap(va, pa, size);
_addressSpace.MapUser(va, pa, size, MemoryPermission.ReadWriteExecute);
AddMapping(va, size);
_pages.AddMapping(va, size);
Tracking.Map(va, size);
}
@@ -112,12 +96,6 @@ namespace Ryujinx.Cpu.AppleHv
}
}
/// <inheritdoc/>
public void MapForeign(ulong va, nuint hostPointer, ulong size)
{
throw new NotSupportedException();
}
/// <inheritdoc/>
public void Unmap(ulong va, ulong size)
{
@@ -126,7 +104,7 @@ namespace Ryujinx.Cpu.AppleHv
UnmapEvent?.Invoke(va, size);
Tracking.Unmap(va, size);
RemoveMapping(va, size);
_pages.RemoveMapping(va, size);
_addressSpace.UnmapUser(va, size);
PtUnmap(va, size);
}
@@ -142,20 +120,11 @@ namespace Ryujinx.Cpu.AppleHv
}
}
/// <inheritdoc/>
public T Read<T>(ulong va) where T : unmanaged
{
return MemoryMarshal.Cast<byte, T>(GetSpan(va, Unsafe.SizeOf<T>()))[0];
}
/// <inheritdoc/>
public T ReadTracked<T>(ulong va) where T : unmanaged
public override T ReadTracked<T>(ulong va)
{
try
{
SignalMemoryTracking(va, (ulong)Unsafe.SizeOf<T>(), false);
return Read<T>(va);
return base.ReadTracked<T>(va);
}
catch (InvalidMemoryRegionException)
{
@@ -168,7 +137,6 @@ namespace Ryujinx.Cpu.AppleHv
}
}
/// <inheritdoc/>
public override void Read(ulong va, Span<byte> data)
{
try
@@ -184,101 +152,11 @@ namespace Ryujinx.Cpu.AppleHv
}
}
/// <inheritdoc/>
public void Write<T>(ulong va, T value) where T : unmanaged
{
Write(va, MemoryMarshal.Cast<T, byte>(MemoryMarshal.CreateSpan(ref value, 1)));
}
/// <inheritdoc/>
public void Write(ulong va, ReadOnlySpan<byte> data)
{
if (data.Length == 0)
{
return;
}
SignalMemoryTracking(va, (ulong)data.Length, true);
WriteImpl(va, data);
}
/// <inheritdoc/>
public void WriteUntracked(ulong va, ReadOnlySpan<byte> data)
{
if (data.Length == 0)
{
return;
}
WriteImpl(va, data);
}
/// <inheritdoc/>
public bool WriteWithRedundancyCheck(ulong va, ReadOnlySpan<byte> data)
{
if (data.Length == 0)
{
return false;
}
SignalMemoryTracking(va, (ulong)data.Length, false);
if (IsContiguousAndMapped(va, data.Length))
{
var target = _backingMemory.GetSpan(GetPhysicalAddressInternal(va), data.Length);
bool changed = !data.SequenceEqual(target);
if (changed)
{
data.CopyTo(target);
}
return changed;
}
else
{
WriteImpl(va, data);
return true;
}
}
private void WriteImpl(ulong va, ReadOnlySpan<byte> data)
public override void Write(ulong va, ReadOnlySpan<byte> data)
{
try
{
AssertValidAddressAndSize(va, (ulong)data.Length);
if (IsContiguousAndMapped(va, data.Length))
{
data.CopyTo(_backingMemory.GetSpan(GetPhysicalAddressInternal(va), data.Length));
}
else
{
int offset = 0, size;
if ((va & PageMask) != 0)
{
ulong pa = GetPhysicalAddressChecked(va);
size = Math.Min(data.Length, PageSize - (int)(va & PageMask));
data[..size].CopyTo(_backingMemory.GetSpan(pa, size));
offset += size;
}
for (; offset < data.Length; offset += size)
{
ulong pa = GetPhysicalAddressChecked(va + (ulong)offset);
size = Math.Min(data.Length - offset, PageSize);
data.Slice(offset, size).CopyTo(_backingMemory.GetSpan(pa, size));
}
}
base.Write(va, data);
}
catch (InvalidMemoryRegionException)
{
@@ -289,61 +167,38 @@ namespace Ryujinx.Cpu.AppleHv
}
}
/// <inheritdoc/>
public ReadOnlySpan<byte> GetSpan(ulong va, int size, bool tracked = false)
public override void WriteUntracked(ulong va, ReadOnlySpan<byte> data)
{
if (size == 0)
try
{
return ReadOnlySpan<byte>.Empty;
base.WriteUntracked(va, data);
}
if (tracked)
catch (InvalidMemoryRegionException)
{
SignalMemoryTracking(va, (ulong)size, false);
}
if (IsContiguousAndMapped(va, size))
{
return _backingMemory.GetSpan(GetPhysicalAddressInternal(va), size);
}
else
{
Span<byte> data = new byte[size];
base.Read(va, data);
return data;
if (_invalidAccessHandler == null || !_invalidAccessHandler(va))
{
throw;
}
}
}
/// <inheritdoc/>
public WritableRegion GetWritableRegion(ulong va, int size, bool tracked = false)
public override ReadOnlySequence<byte> GetReadOnlySequence(ulong va, int size, bool tracked = false)
{
if (size == 0)
try
{
return new WritableRegion(null, va, Memory<byte>.Empty);
return base.GetReadOnlySequence(va, size, tracked);
}
if (tracked)
catch (InvalidMemoryRegionException)
{
SignalMemoryTracking(va, (ulong)size, true);
}
if (_invalidAccessHandler == null || !_invalidAccessHandler(va))
{
throw;
}
if (IsContiguousAndMapped(va, size))
{
return new WritableRegion(null, va, _backingMemory.GetMemory(GetPhysicalAddressInternal(va), size));
}
else
{
Memory<byte> memory = new byte[size];
base.Read(va, memory.Span);
return new WritableRegion(this, va, memory);
return ReadOnlySequence<byte>.Empty;
}
}
/// <inheritdoc/>
public ref T GetRef<T>(ulong va) where T : unmanaged
{
if (!IsContiguous(va, Unsafe.SizeOf<T>()))
@@ -356,26 +211,10 @@ namespace Ryujinx.Cpu.AppleHv
return ref _backingMemory.GetRef<T>(GetPhysicalAddressChecked(va));
}
/// <inheritdoc/>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public bool IsMapped(ulong va)
public override bool IsMapped(ulong va)
{
return ValidateAddress(va) && IsMappedImpl(va);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private bool IsMappedImpl(ulong va)
{
ulong page = va >> PageBits;
int bit = (int)((page & 31) << 1);
int pageIndex = (int)(page >> PageToPteShift);
ref ulong pageRef = ref _pageBitmap[pageIndex];
ulong pte = Volatile.Read(ref pageRef);
return ((pte >> bit) & 3) != 0;
return ValidateAddress(va) && _pages.IsMapped(va);
}
/// <inheritdoc/>
@@ -383,91 +222,7 @@ namespace Ryujinx.Cpu.AppleHv
{
AssertValidAddressAndSize(va, size);
return IsRangeMappedImpl(va, size);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static void GetPageBlockRange(ulong pageStart, ulong pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex)
{
startMask = ulong.MaxValue << ((int)(pageStart & 31) << 1);
endMask = ulong.MaxValue >> (64 - ((int)(pageEnd & 31) << 1));
pageIndex = (int)(pageStart >> PageToPteShift);
pageEndIndex = (int)((pageEnd - 1) >> PageToPteShift);
}
private bool IsRangeMappedImpl(ulong va, ulong size)
{
int pages = GetPagesCount(va, size, out _);
if (pages == 1)
{
return IsMappedImpl(va);
}
ulong pageStart = va >> PageBits;
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
// Check if either bit in each 2 bit page entry is set.
// OR the block with itself shifted down by 1, and check the first bit of each entry.
ulong mask = BlockMappedMask & startMask;
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask &= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte = Volatile.Read(ref pageRef);
pte |= pte >> 1;
if ((pte & mask) != mask)
{
return false;
}
mask = BlockMappedMask;
}
return true;
}
private static void ThrowMemoryNotContiguous() => throw new MemoryNotContiguousException();
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private bool IsContiguousAndMapped(ulong va, int size) => IsContiguous(va, size) && IsMapped(va);
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private bool IsContiguous(ulong va, int size)
{
if (!ValidateAddress(va) || !ValidateAddressAndSize(va, (ulong)size))
{
return false;
}
int pages = GetPagesCount(va, (uint)size, out va);
for (int page = 0; page < pages - 1; page++)
{
if (!ValidateAddress(va + PageSize))
{
return false;
}
if (GetPhysicalAddressInternal(va) + PageSize != GetPhysicalAddressInternal(va + PageSize))
{
return false;
}
va += PageSize;
}
return true;
}
/// <inheritdoc/>
@@ -546,11 +301,10 @@ namespace Ryujinx.Cpu.AppleHv
return regions;
}
/// <inheritdoc/>
/// <remarks>
/// This function also validates that the given range is both valid and mapped, and will throw if it is not.
/// </remarks>
public override void SignalMemoryTracking(ulong va, ulong size, bool write, bool precise = false, int? exemptId = null)
{
AssertValidAddressAndSize(va, size);
@@ -560,199 +314,37 @@ namespace Ryujinx.Cpu.AppleHv
return;
}
// Software table, used for managed memory tracking.
int pages = GetPagesCount(va, size, out _);
ulong pageStart = va >> PageBits;
if (pages == 1)
{
ulong tag = (ulong)(write ? HostMappedPtBits.WriteTracked : HostMappedPtBits.ReadWriteTracked);
int bit = (int)((pageStart & 31) << 1);
int pageIndex = (int)(pageStart >> PageToPteShift);
ref ulong pageRef = ref _pageBitmap[pageIndex];
ulong pte = Volatile.Read(ref pageRef);
ulong state = ((pte >> bit) & 3);
if (state >= tag)
{
Tracking.VirtualMemoryEvent(va, size, write, precise: false, exemptId);
return;
}
else if (state == 0)
{
ThrowInvalidMemoryRegionException($"Not mapped: va=0x{va:X16}, size=0x{size:X16}");
}
}
else
{
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
ulong mask = startMask;
ulong anyTrackingTag = (ulong)HostMappedPtBits.WriteTrackedReplicated;
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask &= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte = Volatile.Read(ref pageRef);
ulong mappedMask = mask & BlockMappedMask;
ulong mappedPte = pte | (pte >> 1);
if ((mappedPte & mappedMask) != mappedMask)
{
ThrowInvalidMemoryRegionException($"Not mapped: va=0x{va:X16}, size=0x{size:X16}");
}
pte &= mask;
if ((pte & anyTrackingTag) != 0) // Search for any tracking.
{
// Writes trigger any tracking.
// Only trigger tracking from reads if both bits are set on any page.
if (write || (pte & (pte >> 1) & BlockMappedMask) != 0)
{
Tracking.VirtualMemoryEvent(va, size, write, precise: false, exemptId);
break;
}
}
mask = ulong.MaxValue;
}
}
_pages.SignalMemoryTracking(Tracking, va, size, write, exemptId);
}
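// Illustrative sketch, not part of the diff: the 2-bit page states that both the old
// bitmap and the new ManagedPageFlags encode (values assumed from HostMappedPtBits):
// 0 = unmapped, 1 = mapped, 2 = write tracked, 3 = read/write tracked. A write signal
// fires for states >= 2, while a read signal only fires for state 3, which is why the
// logic above compares the state against a tag derived from "write".
private static bool ShouldSignalSketch(ulong state, bool write) => state >= (write ? 2UL : 3UL);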
/// <summary>
/// Computes the number of pages in a virtual address range.
/// </summary>
/// <param name="va">Virtual address of the range</param>
/// <param name="size">Size of the range</param>
/// <param name="startVa">The virtual address of the beginning of the first page</param>
/// <remarks>This function does not differentiate between allocated and unallocated pages.</remarks>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static int GetPagesCount(ulong va, ulong size, out ulong startVa)
{
// WARNING: Always check if ulong does not overflow during the operations.
startVa = va & ~(ulong)PageMask;
ulong vaSpan = (va - startVa + size + PageMask) & ~(ulong)PageMask;
return (int)(vaSpan / PageSize);
}
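// Worked example, not from the original code: with PageSize = 0x1000, a range with
// va = 0x1FF0 and size = 0x20 straddles a page boundary; startVa = 0x1000 and
// vaSpan = (0xFF0 + 0x20 + 0xFFF) & ~0xFFFUL = 0x2000, so GetPagesCount returns 2.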
/// <inheritdoc/>
public void Reprotect(ulong va, ulong size, MemoryPermission protection)
{
// TODO
}
/// <inheritdoc/>
public void TrackingReprotect(ulong va, ulong size, MemoryPermission protection, bool guest)
{
if (guest)
{
_addressSpace.ReprotectUser(va, size, protection);
}
else
{
_pages.TrackingReprotect(va, size, protection);
}
}
/// <inheritdoc/>
public RegionHandle BeginTracking(ulong address, ulong size, int id, RegionFlags flags = RegionFlags.None)
{
return Tracking.BeginTracking(address, size, id, flags);
}
/// <inheritdoc/>
public MultiRegionHandle BeginGranularTracking(ulong address, ulong size, IEnumerable<IRegionHandle> handles, ulong granularity, int id, RegionFlags flags = RegionFlags.None)
{
return Tracking.BeginGranularTracking(address, size, handles, granularity, id, flags);
}
/// <inheritdoc/>
@@ -761,87 +353,7 @@ namespace Ryujinx.Cpu.AppleHv
return Tracking.BeginSmartGranularTracking(address, size, granularity, id);
}
/// <summary>
/// Adds the given address mapping to the page table.
/// </summary>
/// <param name="va">Virtual memory address</param>
/// <param name="size">Size to be mapped</param>
private void AddMapping(ulong va, ulong size)
{
int pages = GetPagesCount(va, size, out _);
ulong pageStart = va >> PageBits;
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
ulong mask = startMask;
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask &= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte;
ulong mappedMask;
// Map all 2-bit entries that are unmapped.
do
{
pte = Volatile.Read(ref pageRef);
mappedMask = pte | (pte >> 1);
mappedMask |= (mappedMask & BlockMappedMask) << 1;
mappedMask |= ~mask; // Treat everything outside the range as mapped, thus unchanged.
}
while (Interlocked.CompareExchange(ref pageRef, (pte & mappedMask) | (BlockMappedMask & (~mappedMask)), pte) != pte);
mask = ulong.MaxValue;
}
}
/// <summary>
/// Removes the given address mapping from the page table.
/// </summary>
/// <param name="va">Virtual memory address</param>
/// <param name="size">Size to be unmapped</param>
private void RemoveMapping(ulong va, ulong size)
{
int pages = GetPagesCount(va, size, out _);
ulong pageStart = va >> PageBits;
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
startMask = ~startMask;
endMask = ~endMask;
ulong mask = startMask;
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask |= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte;
do
{
pte = Volatile.Read(ref pageRef);
}
while (Interlocked.CompareExchange(ref pageRef, pte & mask, pte) != pte);
mask = 0;
}
}
private nuint GetPhysicalAddressChecked(ulong va)
{
if (!IsMapped(va))
{
@@ -851,9 +363,9 @@ namespace Ryujinx.Cpu.AppleHv
return GetPhysicalAddressInternal(va);
}
private nuint GetPhysicalAddressInternal(ulong va)
{
return (nuint)(_pageTable.Read(va) + (va & PageMask));
}
/// <summary>
@@ -864,10 +376,17 @@ namespace Ryujinx.Cpu.AppleHv
_addressSpace.Dispose();
}
protected override Memory<byte> GetPhysicalAddressMemory(nuint pa, int size)
=> _backingMemory.GetMemory(pa, size);
protected override Span<byte> GetPhysicalAddressSpan(nuint pa, int size)
=> _backingMemory.GetSpan(pa, size);
protected override nuint TranslateVirtualAddressChecked(ulong va)
=> GetPhysicalAddressChecked(va);
protected override nuint TranslateVirtualAddressUnchecked(ulong va)
=> GetPhysicalAddressInternal(va);
}
}

View File

@@ -28,8 +28,9 @@ namespace Ryujinx.Cpu
/// <param name="address">CPU virtual address of the region</param>
/// <param name="size">Size of the region</param>
/// <param name="id">Handle ID</param>
/// <param name="flags">Region flags</param>
/// <returns>The memory tracking handle</returns>
RegionHandle BeginTracking(ulong address, ulong size, int id, RegionFlags flags = RegionFlags.None);
/// <summary>
/// Obtains a memory tracking handle for the given virtual region, with a specified granularity. This should be disposed when finished with.
@@ -39,8 +40,9 @@ namespace Ryujinx.Cpu
/// <param name="handles">Handles to inherit state from or reuse. When none are present, provide null</param>
/// <param name="granularity">Desired granularity of write tracking</param>
/// <param name="id">Handle ID</param>
/// <param name="flags">Region flags</param>
/// <returns>The memory tracking handle</returns>
MultiRegionHandle BeginGranularTracking(ulong address, ulong size, IEnumerable<IRegionHandle> handles, ulong granularity, int id, RegionFlags flags = RegionFlags.None);
/// <summary>
/// Obtains a smart memory tracking handle for the given virtual region, with a specified granularity. This should be disposed when finished with.

View File

@@ -0,0 +1,35 @@
using Ryujinx.Common.Collections;
using System;
namespace Ryujinx.Cpu.Jit.HostTracked
{
internal class AddressIntrusiveRedBlackTree<T> : IntrusiveRedBlackTree<T> where T : IntrusiveRedBlackTreeNode<T>, IComparable<T>, IComparable<ulong>
{
/// <summary>
/// Retrieve the node that is considered equal to the specified address by the comparator.
/// </summary>
/// <param name="address">Address to compare with</param>
/// <returns>Node that is equal to <paramref name="address"/></returns>
public T GetNode(ulong address)
{
T node = Root;
while (node != null)
{
int cmp = node.CompareTo(address);
if (cmp < 0)
{
node = node.Left;
}
else if (cmp > 0)
{
node = node.Right;
}
else
{
return node;
}
}
return null;
}
}
}
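A minimal usage sketch for the tree above (illustrative only; RangeNode is a hypothetical node type, not part of the diff): any address inside a node's range finds that node, because CompareTo(ulong) returns 0 for contained addresses.
class RangeNode : IntrusiveRedBlackTreeNode<RangeNode>, IComparable<RangeNode>, IComparable<ulong>
{
public ulong Address;
public ulong Size;
public int CompareTo(RangeNode other) => Address.CompareTo(other.Address);
public int CompareTo(ulong address) => address < Address ? -1 : address < Address + Size ? 0 : 1;
}
// var tree = new AddressIntrusiveRedBlackTree<RangeNode>();
// tree.Add(new RangeNode { Address = 0x1000, Size = 0x2000 });
// RangeNode hit = tree.GetNode(0x1800); // returns the node covering [0x1000, 0x3000)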

View File

@@ -0,0 +1,708 @@
using Ryujinx.Common;
using Ryujinx.Common.Collections;
using Ryujinx.Memory;
using System;
using System.Diagnostics;
using System.Threading;
namespace Ryujinx.Cpu.Jit.HostTracked
{
readonly struct PrivateRange
{
public readonly MemoryBlock Memory;
public readonly ulong Offset;
public readonly ulong Size;
public static PrivateRange Empty => new(null, 0, 0);
public PrivateRange(MemoryBlock memory, ulong offset, ulong size)
{
Memory = memory;
Offset = offset;
Size = size;
}
}
class AddressSpacePartition : IDisposable
{
public const ulong GuestPageSize = 0x1000;
private const int DefaultBlockAlignment = 1 << 20;
private enum MappingType : byte
{
None,
Private,
}
private class Mapping : IntrusiveRedBlackTreeNode<Mapping>, IComparable<Mapping>, IComparable<ulong>
{
public ulong Address { get; private set; }
public ulong Size { get; private set; }
public ulong EndAddress => Address + Size;
public MappingType Type { get; private set; }
public Mapping(ulong address, ulong size, MappingType type)
{
Address = address;
Size = size;
Type = type;
}
public Mapping Split(ulong splitAddress)
{
ulong leftSize = splitAddress - Address;
ulong rightSize = EndAddress - splitAddress;
Mapping left = new(Address, leftSize, Type);
Address = splitAddress;
Size = rightSize;
return left;
}
public void UpdateState(MappingType newType)
{
Type = newType;
}
public void Extend(ulong sizeDelta)
{
Size += sizeDelta;
}
public int CompareTo(Mapping other)
{
if (Address < other.Address)
{
return -1;
}
else if (Address <= other.EndAddress - 1UL)
{
return 0;
}
else
{
return 1;
}
}
public int CompareTo(ulong address)
{
if (address < Address)
{
return -1;
}
else if (address <= EndAddress - 1UL)
{
return 0;
}
else
{
return 1;
}
}
}
private class PrivateMapping : IntrusiveRedBlackTreeNode<PrivateMapping>, IComparable<PrivateMapping>, IComparable<ulong>
{
public ulong Address { get; private set; }
public ulong Size { get; private set; }
public ulong EndAddress => Address + Size;
public PrivateMemoryAllocation PrivateAllocation { get; private set; }
public PrivateMapping(ulong address, ulong size, PrivateMemoryAllocation privateAllocation)
{
Address = address;
Size = size;
PrivateAllocation = privateAllocation;
}
public PrivateMapping Split(ulong splitAddress)
{
ulong leftSize = splitAddress - Address;
ulong rightSize = EndAddress - splitAddress;
Debug.Assert(leftSize > 0);
Debug.Assert(rightSize > 0);
(var leftAllocation, PrivateAllocation) = PrivateAllocation.Split(leftSize);
PrivateMapping left = new(Address, leftSize, leftAllocation);
Address = splitAddress;
Size = rightSize;
return left;
}
public void Map(AddressSpacePartitionMultiAllocation baseBlock, ulong baseAddress, PrivateMemoryAllocation newAllocation)
{
baseBlock.MapView(newAllocation.Memory, newAllocation.Offset, Address - baseAddress, Size);
PrivateAllocation = newAllocation;
}
public void Unmap(AddressSpacePartitionMultiAllocation baseBlock, ulong baseAddress)
{
if (PrivateAllocation.IsValid)
{
baseBlock.UnmapView(PrivateAllocation.Memory, Address - baseAddress, Size);
PrivateAllocation.Dispose();
}
PrivateAllocation = default;
}
public void Extend(ulong sizeDelta)
{
Size += sizeDelta;
}
public int CompareTo(PrivateMapping other)
{
if (Address < other.Address)
{
return -1;
}
else if (Address <= other.EndAddress - 1UL)
{
return 0;
}
else
{
return 1;
}
}
public int CompareTo(ulong address)
{
if (address < Address)
{
return -1;
}
else if (address <= EndAddress - 1UL)
{
return 0;
}
else
{
return 1;
}
}
}
private readonly MemoryBlock _backingMemory;
private readonly AddressSpacePartitionMultiAllocation _baseMemory;
private readonly PrivateMemoryAllocator _privateMemoryAllocator;
private readonly AddressIntrusiveRedBlackTree<Mapping> _mappingTree;
private readonly AddressIntrusiveRedBlackTree<PrivateMapping> _privateTree;
private readonly ReaderWriterLockSlim _treeLock;
private readonly ulong _hostPageSize;
private ulong? _firstPagePa;
private ulong? _lastPagePa;
private ulong _cachedFirstPagePa;
private ulong _cachedLastPagePa;
private MemoryBlock _firstPageMemoryForUnmap;
private ulong _firstPageOffsetForLateMap;
private MemoryPermission _firstPageMemoryProtection;
public ulong Address { get; }
public ulong Size { get; }
public ulong EndAddress => Address + Size;
public AddressSpacePartition(AddressSpacePartitionAllocation baseMemory, MemoryBlock backingMemory, ulong address, ulong size)
{
_privateMemoryAllocator = new PrivateMemoryAllocator(DefaultBlockAlignment, MemoryAllocationFlags.Mirrorable);
_mappingTree = new AddressIntrusiveRedBlackTree<Mapping>();
_privateTree = new AddressIntrusiveRedBlackTree<PrivateMapping>();
_treeLock = new ReaderWriterLockSlim();
_mappingTree.Add(new Mapping(address, size, MappingType.None));
_privateTree.Add(new PrivateMapping(address, size, default));
_hostPageSize = MemoryBlock.GetPageSize();
_backingMemory = backingMemory;
_baseMemory = new(baseMemory);
_cachedFirstPagePa = ulong.MaxValue;
_cachedLastPagePa = ulong.MaxValue;
Address = address;
Size = size;
}
public bool IsEmpty()
{
_treeLock.EnterReadLock();
try
{
Mapping map = _mappingTree.GetNode(Address);
return map != null && map.Address == Address && map.Size == Size && map.Type == MappingType.None;
}
finally
{
_treeLock.ExitReadLock();
}
}
public void Map(ulong va, ulong pa, ulong size)
{
Debug.Assert(va >= Address);
Debug.Assert(va + size <= EndAddress);
if (va == Address)
{
_firstPagePa = pa;
}
if (va <= EndAddress - GuestPageSize && va + size > EndAddress - GuestPageSize)
{
_lastPagePa = pa + ((EndAddress - GuestPageSize) - va);
}
Update(va, pa, size, MappingType.Private);
}
public void Unmap(ulong va, ulong size)
{
Debug.Assert(va >= Address);
Debug.Assert(va + size <= EndAddress);
if (va == Address)
{
_firstPagePa = null;
}
if (va <= EndAddress - GuestPageSize && va + size > EndAddress - GuestPageSize)
{
_lastPagePa = null;
}
Update(va, 0UL, size, MappingType.None);
}
public void ReprotectAligned(ulong va, ulong size, MemoryPermission protection)
{
Debug.Assert(va >= Address);
Debug.Assert(va + size <= EndAddress);
_baseMemory.Reprotect(va - Address, size, protection, false);
if (va == Address)
{
_firstPageMemoryProtection = protection;
}
}
public void Reprotect(
ulong va,
ulong size,
MemoryPermission protection,
AddressSpacePartitioned addressSpace,
Action<ulong, IntPtr, ulong> updatePtCallback)
{
if (_baseMemory.LazyInitMirrorForProtection(addressSpace, Address, Size, protection))
{
LateMap();
}
updatePtCallback(va, _baseMemory.GetPointerForProtection(va - Address, size, protection), size);
}
public IntPtr GetPointer(ulong va, ulong size)
{
Debug.Assert(va >= Address);
Debug.Assert(va + size <= EndAddress);
return _baseMemory.GetPointer(va - Address, size);
}
public void InsertBridgeAtEnd(AddressSpacePartition partitionAfter, bool useProtectionMirrors)
{
ulong firstPagePa = partitionAfter?._firstPagePa ?? ulong.MaxValue;
ulong lastPagePa = _lastPagePa ?? ulong.MaxValue;
if (firstPagePa != _cachedFirstPagePa || lastPagePa != _cachedLastPagePa)
{
if (partitionAfter != null && partitionAfter._firstPagePa.HasValue)
{
(MemoryBlock firstPageMemory, ulong firstPageOffset) = partitionAfter.GetFirstPageMemoryAndOffset();
_baseMemory.MapView(firstPageMemory, firstPageOffset, Size, _hostPageSize);
if (!useProtectionMirrors)
{
_baseMemory.Reprotect(Size, _hostPageSize, partitionAfter._firstPageMemoryProtection, throwOnFail: false);
}
_firstPageMemoryForUnmap = firstPageMemory;
_firstPageOffsetForLateMap = firstPageOffset;
}
else
{
MemoryBlock firstPageMemoryForUnmap = _firstPageMemoryForUnmap;
if (firstPageMemoryForUnmap != null)
{
_baseMemory.UnmapView(firstPageMemoryForUnmap, Size, _hostPageSize);
_firstPageMemoryForUnmap = null;
}
}
_cachedFirstPagePa = firstPagePa;
_cachedLastPagePa = lastPagePa;
}
}
public void ReprotectBridge(MemoryPermission protection)
{
if (_firstPageMemoryForUnmap != null)
{
_baseMemory.Reprotect(Size, _hostPageSize, protection, throwOnFail: false);
}
}
private (MemoryBlock, ulong) GetFirstPageMemoryAndOffset()
{
_treeLock.EnterReadLock();
try
{
PrivateMapping map = _privateTree.GetNode(Address);
if (map != null && map.PrivateAllocation.IsValid)
{
return (map.PrivateAllocation.Memory, map.PrivateAllocation.Offset + (Address - map.Address));
}
}
finally
{
_treeLock.ExitReadLock();
}
return (_backingMemory, _firstPagePa.Value);
}
public PrivateRange GetPrivateAllocation(ulong va)
{
_treeLock.EnterReadLock();
try
{
PrivateMapping map = _privateTree.GetNode(va);
if (map != null && map.PrivateAllocation.IsValid)
{
return new(map.PrivateAllocation.Memory, map.PrivateAllocation.Offset + (va - map.Address), map.Size - (va - map.Address));
}
}
finally
{
_treeLock.ExitReadLock();
}
return PrivateRange.Empty;
}
private void Update(ulong va, ulong pa, ulong size, MappingType type)
{
_treeLock.EnterWriteLock();
try
{
Mapping map = _mappingTree.GetNode(va);
Update(map, va, pa, size, type);
}
finally
{
_treeLock.ExitWriteLock();
}
}
private Mapping Update(Mapping map, ulong va, ulong pa, ulong size, MappingType type)
{
ulong endAddress = va + size;
for (; map != null; map = map.Successor)
{
if (map.Address < va)
{
_mappingTree.Add(map.Split(va));
}
if (map.EndAddress > endAddress)
{
Mapping newMap = map.Split(endAddress);
_mappingTree.Add(newMap);
map = newMap;
}
switch (type)
{
case MappingType.None:
ulong alignment = _hostPageSize;
bool unmappedBefore = map.Predecessor == null ||
(map.Predecessor.Type == MappingType.None && map.Predecessor.Address <= BitUtils.AlignDown(va, alignment));
bool unmappedAfter = map.Successor == null ||
(map.Successor.Type == MappingType.None && map.Successor.EndAddress >= BitUtils.AlignUp(endAddress, alignment));
UnmapPrivate(va, size, unmappedBefore, unmappedAfter);
break;
case MappingType.Private:
MapPrivate(va, size);
break;
}
map.UpdateState(type);
map = TryCoalesce(map);
if (map.EndAddress >= endAddress)
{
break;
}
}
return map;
}
private Mapping TryCoalesce(Mapping map)
{
Mapping previousMap = map.Predecessor;
Mapping nextMap = map.Successor;
if (previousMap != null && CanCoalesce(previousMap, map))
{
previousMap.Extend(map.Size);
_mappingTree.Remove(map);
map = previousMap;
}
if (nextMap != null && CanCoalesce(map, nextMap))
{
map.Extend(nextMap.Size);
_mappingTree.Remove(nextMap);
}
return map;
}
private static bool CanCoalesce(Mapping left, Mapping right)
{
return left.Type == right.Type;
}
private void MapPrivate(ulong va, ulong size)
{
ulong endAddress = va + size;
ulong alignment = _hostPageSize;
// Expand the range outwards based on page size to ensure that at least the requested region is mapped.
ulong vaAligned = BitUtils.AlignDown(va, alignment);
ulong endAddressAligned = BitUtils.AlignUp(endAddress, alignment);
PrivateMapping map = _privateTree.GetNode(va);
for (; map != null; map = map.Successor)
{
if (!map.PrivateAllocation.IsValid)
{
if (map.Address < vaAligned)
{
_privateTree.Add(map.Split(vaAligned));
}
if (map.EndAddress > endAddressAligned)
{
PrivateMapping newMap = map.Split(endAddressAligned);
_privateTree.Add(newMap);
map = newMap;
}
map.Map(_baseMemory, Address, _privateMemoryAllocator.Allocate(map.Size, _hostPageSize));
}
if (map.EndAddress >= endAddressAligned)
{
break;
}
}
}
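// Worked example, not from the original code: with a 16 KiB host page
// (alignment = 0x4000), a request for va = 0x5000, size = 0x2000 expands to
// vaAligned = 0x4000 and endAddressAligned = 0x8000, so the whole host page backing
// the guest range gets a private allocation. UnmapPrivate below shrinks inwards
// instead, unless the neighbouring mappings are already unmapped, so host pages
// still in use survive.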
private void UnmapPrivate(ulong va, ulong size, bool unmappedBefore, bool unmappedAfter)
{
ulong endAddress = va + size;
ulong alignment = _hostPageSize;
// If the adjacent mappings are unmapped, expand the range outwards,
// otherwise shrink it inwards. We must ensure we won't unmap pages that might still be in use.
ulong vaAligned = unmappedBefore ? BitUtils.AlignDown(va, alignment) : BitUtils.AlignUp(va, alignment);
ulong endAddressAligned = unmappedAfter ? BitUtils.AlignUp(endAddress, alignment) : BitUtils.AlignDown(endAddress, alignment);
if (endAddressAligned <= vaAligned)
{
return;
}
PrivateMapping map = _privateTree.GetNode(vaAligned);
for (; map != null; map = map.Successor)
{
if (map.PrivateAllocation.IsValid)
{
if (map.Address < vaAligned)
{
_privateTree.Add(map.Split(vaAligned));
}
if (map.EndAddress > endAddressAligned)
{
PrivateMapping newMap = map.Split(endAddressAligned);
_privateTree.Add(newMap);
map = newMap;
}
map.Unmap(_baseMemory, Address);
map = TryCoalesce(map);
}
if (map.EndAddress >= endAddressAligned)
{
break;
}
}
}
private PrivateMapping TryCoalesce(PrivateMapping map)
{
PrivateMapping previousMap = map.Predecessor;
PrivateMapping nextMap = map.Successor;
if (previousMap != null && CanCoalesce(previousMap, map))
{
previousMap.Extend(map.Size);
_privateTree.Remove(map);
map = previousMap;
}
if (nextMap != null && CanCoalesce(map, nextMap))
{
map.Extend(nextMap.Size);
_privateTree.Remove(nextMap);
}
return map;
}
private static bool CanCoalesce(PrivateMapping left, PrivateMapping right)
{
return !left.PrivateAllocation.IsValid && !right.PrivateAllocation.IsValid;
}
private void LateMap()
{
// Map all existing private allocations.
// This is necessary to ensure mirrors that are lazily created have the same mappings as the main one.
PrivateMapping map = _privateTree.GetNode(Address);
for (; map != null; map = map.Successor)
{
if (map.PrivateAllocation.IsValid)
{
_baseMemory.LateMapView(map.PrivateAllocation.Memory, map.PrivateAllocation.Offset, map.Address - Address, map.Size);
}
}
MemoryBlock firstPageMemory = _firstPageMemoryForUnmap;
ulong firstPageOffset = _firstPageOffsetForLateMap;
if (firstPageMemory != null)
{
_baseMemory.LateMapView(firstPageMemory, firstPageOffset, Size, _hostPageSize);
}
}
public PrivateRange GetFirstPrivateAllocation(ulong va, ulong size, out ulong nextVa)
{
_treeLock.EnterReadLock();
try
{
PrivateMapping map = _privateTree.GetNode(va);
nextVa = map.EndAddress;
if (map != null && map.PrivateAllocation.IsValid)
{
ulong startOffset = va - map.Address;
return new(
map.PrivateAllocation.Memory,
map.PrivateAllocation.Offset + startOffset,
Math.Min(map.PrivateAllocation.Size - startOffset, size));
}
}
finally
{
_treeLock.ExitReadLock();
}
return PrivateRange.Empty;
}
public bool HasPrivateAllocation(ulong va, ulong size, ulong startVa, ulong startSize, ref PrivateRange range)
{
ulong endVa = va + size;
_treeLock.EnterReadLock();
try
{
for (PrivateMapping map = _privateTree.GetNode(va); map != null && map.Address < endVa; map = map.Successor)
{
if (map.PrivateAllocation.IsValid)
{
if (map.Address <= startVa && map.EndAddress >= startVa + startSize)
{
ulong startOffset = startVa - map.Address;
range = new(
map.PrivateAllocation.Memory,
map.PrivateAllocation.Offset + startOffset,
Math.Min(map.PrivateAllocation.Size - startOffset, startSize));
}
return true;
}
}
}
finally
{
_treeLock.ExitReadLock();
}
return false;
}
public void Dispose()
{
GC.SuppressFinalize(this);
_privateMemoryAllocator.Dispose();
_baseMemory.Dispose();
}
}
}

View File

@@ -0,0 +1,202 @@
using Ryujinx.Common;
using Ryujinx.Common.Collections;
using Ryujinx.Memory;
using Ryujinx.Memory.Tracking;
using System;
namespace Ryujinx.Cpu.Jit.HostTracked
{
readonly struct AddressSpacePartitionAllocation : IDisposable
{
private readonly AddressSpacePartitionAllocator _owner;
private readonly PrivateMemoryAllocatorImpl<AddressSpacePartitionAllocator.Block>.Allocation _allocation;
public IntPtr Pointer => (IntPtr)((ulong)_allocation.Block.Memory.Pointer + _allocation.Offset);
public bool IsValid => _owner != null;
public AddressSpacePartitionAllocation(
AddressSpacePartitionAllocator owner,
PrivateMemoryAllocatorImpl<AddressSpacePartitionAllocator.Block>.Allocation allocation)
{
_owner = owner;
_allocation = allocation;
}
public void RegisterMapping(ulong va, ulong endVa)
{
_allocation.Block.AddMapping(_allocation.Offset, _allocation.Size, va, endVa);
}
public void MapView(MemoryBlock srcBlock, ulong srcOffset, ulong dstOffset, ulong size)
{
_allocation.Block.Memory.MapView(srcBlock, srcOffset, _allocation.Offset + dstOffset, size);
}
public void UnmapView(MemoryBlock srcBlock, ulong offset, ulong size)
{
_allocation.Block.Memory.UnmapView(srcBlock, _allocation.Offset + offset, size);
}
public void Reprotect(ulong offset, ulong size, MemoryPermission permission, bool throwOnFail)
{
_allocation.Block.Memory.Reprotect(_allocation.Offset + offset, size, permission, throwOnFail);
}
public IntPtr GetPointer(ulong offset, ulong size)
{
return _allocation.Block.Memory.GetPointer(_allocation.Offset + offset, size);
}
public void Dispose()
{
_allocation.Block.RemoveMapping(_allocation.Offset, _allocation.Size);
_owner.Free(_allocation.Block, _allocation.Offset, _allocation.Size);
}
}
class AddressSpacePartitionAllocator : PrivateMemoryAllocatorImpl<AddressSpacePartitionAllocator.Block>
{
private const ulong DefaultBlockAlignment = 1UL << 32; // 4GB
public class Block : PrivateMemoryAllocator.Block
{
private readonly MemoryTracking _tracking;
private readonly Func<ulong, ulong> _readPtCallback;
private readonly MemoryEhMeilleure _memoryEh;
private class Mapping : IntrusiveRedBlackTreeNode<Mapping>, IComparable<Mapping>, IComparable<ulong>
{
public ulong Address { get; }
public ulong Size { get; }
public ulong EndAddress => Address + Size;
public ulong Va { get; }
public ulong EndVa { get; }
public Mapping(ulong address, ulong size, ulong va, ulong endVa)
{
Address = address;
Size = size;
Va = va;
EndVa = endVa;
}
public int CompareTo(Mapping other)
{
if (Address < other.Address)
{
return -1;
}
else if (Address <= other.EndAddress - 1UL)
{
return 0;
}
else
{
return 1;
}
}
public int CompareTo(ulong address)
{
if (address < Address)
{
return -1;
}
else if (address <= EndAddress - 1UL)
{
return 0;
}
else
{
return 1;
}
}
}
private readonly AddressIntrusiveRedBlackTree<Mapping> _mappingTree;
private readonly object _lock;
public Block(MemoryTracking tracking, Func<ulong, ulong> readPtCallback, MemoryBlock memory, ulong size, object locker) : base(memory, size)
{
_tracking = tracking;
_readPtCallback = readPtCallback;
_memoryEh = new(memory, null, tracking, VirtualMemoryEvent);
_mappingTree = new();
_lock = locker;
}
public void AddMapping(ulong offset, ulong size, ulong va, ulong endVa)
{
_mappingTree.Add(new(offset, size, va, endVa));
}
public void RemoveMapping(ulong offset, ulong size)
{
_mappingTree.Remove(_mappingTree.GetNode(offset));
}
private ulong VirtualMemoryEvent(ulong address, ulong size, bool write)
{
Mapping map;
lock (_lock)
{
map = _mappingTree.GetNode(address);
}
if (map == null)
{
return 0;
}
address -= map.Address;
ulong addressAligned = BitUtils.AlignDown(address, AddressSpacePartition.GuestPageSize);
ulong endAddressAligned = BitUtils.AlignUp(address + size, AddressSpacePartition.GuestPageSize);
ulong sizeAligned = endAddressAligned - addressAligned;
if (!_tracking.VirtualMemoryEvent(map.Va + addressAligned, sizeAligned, write))
{
return 0;
}
return _readPtCallback(map.Va + address);
}
public override void Destroy()
{
_memoryEh.Dispose();
base.Destroy();
}
}
private readonly MemoryTracking _tracking;
private readonly Func<ulong, ulong> _readPtCallback;
private readonly object _lock;
public AddressSpacePartitionAllocator(
MemoryTracking tracking,
Func<ulong, ulong> readPtCallback,
object locker) : base(DefaultBlockAlignment, MemoryAllocationFlags.Reserve | MemoryAllocationFlags.ViewCompatible)
{
_tracking = tracking;
_readPtCallback = readPtCallback;
_lock = locker;
}
public AddressSpacePartitionAllocation Allocate(ulong va, ulong size)
{
AddressSpacePartitionAllocation allocation = new(this, Allocate(size, MemoryBlock.GetPageSize(), CreateBlock));
allocation.RegisterMapping(va, va + size);
return allocation;
}
private Block CreateBlock(MemoryBlock memory, ulong size)
{
return new Block(_tracking, _readPtCallback, memory, size, _lock);
}
}
}

View File

@@ -0,0 +1,101 @@
using Ryujinx.Memory;
using System;
using System.Diagnostics;
namespace Ryujinx.Cpu.Jit.HostTracked
{
class AddressSpacePartitionMultiAllocation : IDisposable
{
private readonly AddressSpacePartitionAllocation _baseMemory;
private AddressSpacePartitionAllocation _baseMemoryRo;
private AddressSpacePartitionAllocation _baseMemoryNone;
public AddressSpacePartitionMultiAllocation(AddressSpacePartitionAllocation baseMemory)
{
_baseMemory = baseMemory;
}
public void MapView(MemoryBlock srcBlock, ulong srcOffset, ulong dstOffset, ulong size)
{
_baseMemory.MapView(srcBlock, srcOffset, dstOffset, size);
if (_baseMemoryRo.IsValid)
{
_baseMemoryRo.MapView(srcBlock, srcOffset, dstOffset, size);
_baseMemoryRo.Reprotect(dstOffset, size, MemoryPermission.Read, false);
}
}
public void LateMapView(MemoryBlock srcBlock, ulong srcOffset, ulong dstOffset, ulong size)
{
_baseMemoryRo.MapView(srcBlock, srcOffset, dstOffset, size);
_baseMemoryRo.Reprotect(dstOffset, size, MemoryPermission.Read, false);
}
public void UnmapView(MemoryBlock srcBlock, ulong offset, ulong size)
{
_baseMemory.UnmapView(srcBlock, offset, size);
if (_baseMemoryRo.IsValid)
{
_baseMemoryRo.UnmapView(srcBlock, offset, size);
}
}
public void Reprotect(ulong offset, ulong size, MemoryPermission permission, bool throwOnFail)
{
_baseMemory.Reprotect(offset, size, permission, throwOnFail);
}
public IntPtr GetPointer(ulong offset, ulong size)
{
return _baseMemory.GetPointer(offset, size);
}
public bool LazyInitMirrorForProtection(AddressSpacePartitioned addressSpace, ulong blockAddress, ulong blockSize, MemoryPermission permission)
{
if (permission == MemoryPermission.None && !_baseMemoryNone.IsValid)
{
_baseMemoryNone = addressSpace.CreateAsPartitionAllocation(blockAddress, blockSize);
}
else if (permission == MemoryPermission.Read && !_baseMemoryRo.IsValid)
{
_baseMemoryRo = addressSpace.CreateAsPartitionAllocation(blockAddress, blockSize);
return true;
}
return false;
}
public IntPtr GetPointerForProtection(ulong offset, ulong size, MemoryPermission permission)
{
AddressSpacePartitionAllocation allocation = permission switch
{
MemoryPermission.ReadAndWrite => _baseMemory,
MemoryPermission.Read => _baseMemoryRo,
MemoryPermission.None => _baseMemoryNone,
_ => throw new ArgumentException($"Invalid protection \"{permission}\"."),
};
Debug.Assert(allocation.IsValid);
return allocation.GetPointer(offset, size);
}
public void Dispose()
{
_baseMemory.Dispose();
if (_baseMemoryRo.IsValid)
{
_baseMemoryRo.Dispose();
}
if (_baseMemoryNone.IsValid)
{
_baseMemoryNone.Dispose();
}
}
}
}

View File

@@ -0,0 +1,407 @@
using Ryujinx.Common;
using Ryujinx.Memory;
using Ryujinx.Memory.Tracking;
using System;
using System.Collections.Generic;
using System.Diagnostics;
namespace Ryujinx.Cpu.Jit.HostTracked
{
class AddressSpacePartitioned : IDisposable
{
private const int PartitionBits = 25;
private const ulong PartitionSize = 1UL << PartitionBits;
private readonly MemoryBlock _backingMemory;
private readonly List<AddressSpacePartition> _partitions;
private readonly AddressSpacePartitionAllocator _asAllocator;
private readonly Action<ulong, IntPtr, ulong> _updatePtCallback;
private readonly bool _useProtectionMirrors;
public AddressSpacePartitioned(MemoryTracking tracking, MemoryBlock backingMemory, NativePageTable nativePageTable, bool useProtectionMirrors)
{
_backingMemory = backingMemory;
_partitions = new();
_asAllocator = new(tracking, nativePageTable.Read, _partitions);
_updatePtCallback = nativePageTable.Update;
_useProtectionMirrors = useProtectionMirrors;
}
public void Map(ulong va, ulong pa, ulong size)
{
ulong endVa = va + size;
lock (_partitions)
{
EnsurePartitionsLocked(va, size);
while (va < endVa)
{
int partitionIndex = FindPartitionIndexLocked(va);
AddressSpacePartition partition = _partitions[partitionIndex];
(ulong clampedVa, ulong clampedEndVa) = ClampRange(partition, va, endVa);
partition.Map(clampedVa, pa, clampedEndVa - clampedVa);
ulong currentSize = clampedEndVa - clampedVa;
va += currentSize;
pa += currentSize;
InsertOrRemoveBridgeIfNeeded(partitionIndex);
}
}
}
public void Unmap(ulong va, ulong size)
{
ulong endVa = va + size;
while (va < endVa)
{
AddressSpacePartition partition;
lock (_partitions)
{
int partitionIndex = FindPartitionIndexLocked(va);
if (partitionIndex < 0)
{
va += PartitionSize - (va & (PartitionSize - 1));
continue;
}
partition = _partitions[partitionIndex];
(ulong clampedVa, ulong clampedEndVa) = ClampRange(partition, va, endVa);
partition.Unmap(clampedVa, clampedEndVa - clampedVa);
va += clampedEndVa - clampedVa;
InsertOrRemoveBridgeIfNeeded(partitionIndex);
if (partition.IsEmpty())
{
_partitions.Remove(partition);
partition.Dispose();
}
}
}
}
public void Reprotect(ulong va, ulong size, MemoryPermission protection)
{
ulong endVa = va + size;
lock (_partitions)
{
while (va < endVa)
{
AddressSpacePartition partition = FindPartitionWithIndex(va, out int partitionIndex);
if (partition == null)
{
va += PartitionSize - (va & (PartitionSize - 1));
continue;
}
(ulong clampedVa, ulong clampedEndVa) = ClampRange(partition, va, endVa);
if (_useProtectionMirrors)
{
partition.Reprotect(clampedVa, clampedEndVa - clampedVa, protection, this, _updatePtCallback);
}
else
{
partition.ReprotectAligned(clampedVa, clampedEndVa - clampedVa, protection);
if (clampedVa == partition.Address &&
partitionIndex > 0 &&
_partitions[partitionIndex - 1].EndAddress == partition.Address)
{
_partitions[partitionIndex - 1].ReprotectBridge(protection);
}
}
va += clampedEndVa - clampedVa;
}
}
}
public PrivateRange GetPrivateAllocation(ulong va)
{
AddressSpacePartition partition = FindPartition(va);
if (partition == null)
{
return PrivateRange.Empty;
}
return partition.GetPrivateAllocation(va);
}
public PrivateRange GetFirstPrivateAllocation(ulong va, ulong size, out ulong nextVa)
{
AddressSpacePartition partition = FindPartition(va);
if (partition == null)
{
nextVa = (va & ~(PartitionSize - 1)) + PartitionSize;
return PrivateRange.Empty;
}
return partition.GetFirstPrivateAllocation(va, size, out nextVa);
}
public bool HasAnyPrivateAllocation(ulong va, ulong size, out PrivateRange range)
{
range = PrivateRange.Empty;
ulong startVa = va;
ulong endVa = va + size;
while (va < endVa)
{
AddressSpacePartition partition = FindPartition(va);
if (partition == null)
{
va += PartitionSize - (va & (PartitionSize - 1));
continue;
}
(ulong clampedVa, ulong clampedEndVa) = ClampRange(partition, va, endVa);
if (partition.HasPrivateAllocation(clampedVa, clampedEndVa - clampedVa, startVa, size, ref range))
{
return true;
}
va += clampedEndVa - clampedVa;
}
return false;
}
private void InsertOrRemoveBridgeIfNeeded(int partitionIndex)
{
if (partitionIndex > 0)
{
if (_partitions[partitionIndex - 1].EndAddress == _partitions[partitionIndex].Address)
{
_partitions[partitionIndex - 1].InsertBridgeAtEnd(_partitions[partitionIndex], _useProtectionMirrors);
}
else
{
_partitions[partitionIndex - 1].InsertBridgeAtEnd(null, _useProtectionMirrors);
}
}
if (partitionIndex + 1 < _partitions.Count && _partitions[partitionIndex].EndAddress == _partitions[partitionIndex + 1].Address)
{
_partitions[partitionIndex].InsertBridgeAtEnd(_partitions[partitionIndex + 1], _useProtectionMirrors);
}
else
{
_partitions[partitionIndex].InsertBridgeAtEnd(null, _useProtectionMirrors);
}
}
public IntPtr GetPointer(ulong va, ulong size)
{
AddressSpacePartition partition = FindPartition(va);
return partition.GetPointer(va, size);
}
private static (ulong, ulong) ClampRange(AddressSpacePartition partition, ulong va, ulong endVa)
{
if (va < partition.Address)
{
va = partition.Address;
}
if (endVa > partition.EndAddress)
{
endVa = partition.EndAddress;
}
return (va, endVa);
}
private AddressSpacePartition FindPartition(ulong va)
{
lock (_partitions)
{
int index = FindPartitionIndexLocked(va);
if (index >= 0)
{
return _partitions[index];
}
}
return null;
}
private AddressSpacePartition FindPartitionWithIndex(ulong va, out int index)
{
lock (_partitions)
{
index = FindPartitionIndexLocked(va);
if (index >= 0)
{
return _partitions[index];
}
}
return null;
}
private int FindPartitionIndexLocked(ulong va)
{
int left = 0;
int middle;
int right = _partitions.Count - 1;
while (left <= right)
{
middle = left + ((right - left) >> 1);
AddressSpacePartition partition = _partitions[middle];
if (partition.Address <= va && partition.EndAddress > va)
{
return middle;
}
if (partition.Address >= va)
{
right = middle - 1;
}
else
{
left = middle + 1;
}
}
return -1;
}
private void EnsurePartitionsLocked(ulong va, ulong size)
{
ulong endVa = BitUtils.AlignUp(va + size, PartitionSize);
va = BitUtils.AlignDown(va, PartitionSize);
for (int i = 0; i < _partitions.Count && va < endVa; i++)
{
AddressSpacePartition partition = _partitions[i];
if (partition.Address <= va && partition.EndAddress > va)
{
if (partition.EndAddress >= endVa)
{
// Fully mapped already.
va = endVa;
break;
}
ulong gapSize;
if (i + 1 < _partitions.Count)
{
AddressSpacePartition nextPartition = _partitions[i + 1];
if (partition.EndAddress == nextPartition.Address)
{
va = partition.EndAddress;
continue;
}
gapSize = Math.Min(endVa, nextPartition.Address) - partition.EndAddress;
}
else
{
gapSize = endVa - partition.EndAddress;
}
_partitions.Insert(i + 1, CreateAsPartition(partition.EndAddress, gapSize));
va = partition.EndAddress + gapSize;
i++;
}
else if (partition.EndAddress > va)
{
Debug.Assert(partition.Address > va);
ulong gapSize;
if (partition.Address < endVa)
{
gapSize = partition.Address - va;
}
else
{
gapSize = endVa - va;
}
_partitions.Insert(i, CreateAsPartition(va, gapSize));
va = Math.Min(partition.EndAddress, endVa);
i++;
}
}
if (va < endVa)
{
_partitions.Add(CreateAsPartition(va, endVa - va));
}
ValidatePartitionList();
}
[Conditional("DEBUG")]
private void ValidatePartitionList()
{
for (int i = 1; i < _partitions.Count; i++)
{
Debug.Assert(_partitions[i].Address > _partitions[i - 1].Address);
Debug.Assert(_partitions[i].EndAddress > _partitions[i - 1].EndAddress);
}
}
private AddressSpacePartition CreateAsPartition(ulong va, ulong size)
{
return new(CreateAsPartitionAllocation(va, size), _backingMemory, va, size);
}
public AddressSpacePartitionAllocation CreateAsPartitionAllocation(ulong va, ulong size)
{
return _asAllocator.Allocate(va, size + MemoryBlock.GetPageSize());
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
foreach (AddressSpacePartition partition in _partitions)
{
partition.Dispose();
}
_partitions.Clear();
_asAllocator.Dispose();
}
}
public void Dispose()
{
Dispose(disposing: true);
GC.SuppressFinalize(this);
}
}
}
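A quick sanity check on the constants above (illustrative, assuming a 39-bit guest address space): PartitionBits = 25 makes each partition 1 << 25 = 32 MiB, so the address space splits into at most 2^39 / 2^25 = 16384 partitions, and the binary search in FindPartitionIndexLocked needs at most 14 comparisons.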

View File

@@ -0,0 +1,223 @@
using Ryujinx.Cpu.Signal;
using Ryujinx.Memory;
using System;
using System.Diagnostics;
using System.Numerics;
using System.Runtime.InteropServices;
namespace Ryujinx.Cpu.Jit.HostTracked
{
sealed class NativePageTable : IDisposable
{
private delegate ulong TrackingEventDelegate(ulong address, ulong size, bool write);
private const int PageBits = 12;
private const int PageSize = 1 << PageBits;
private const int PageMask = PageSize - 1;
private const int PteSize = 8;
private readonly int _bitsPerPtPage;
private readonly int _entriesPerPtPage;
private readonly int _pageCommitmentBits;
private readonly PageTable<ulong> _pageTable;
private readonly MemoryBlock _nativePageTable;
private readonly ulong[] _pageCommitmentBitmap;
private readonly ulong _hostPageSize;
private readonly TrackingEventDelegate _trackingEvent;
private bool _disposed;
public IntPtr PageTablePointer => _nativePageTable.Pointer;
public NativePageTable(ulong asSize)
{
ulong hostPageSize = MemoryBlock.GetPageSize();
_entriesPerPtPage = (int)(hostPageSize / sizeof(ulong));
_bitsPerPtPage = BitOperations.Log2((uint)_entriesPerPtPage);
_pageCommitmentBits = PageBits + _bitsPerPtPage;
_hostPageSize = hostPageSize;
_pageTable = new PageTable<ulong>();
_nativePageTable = new MemoryBlock((asSize / PageSize) * PteSize + _hostPageSize, MemoryAllocationFlags.Reserve);
_pageCommitmentBitmap = new ulong[(asSize >> _pageCommitmentBits) / (sizeof(ulong) * 8)];
ulong ptStart = (ulong)_nativePageTable.Pointer;
ulong ptEnd = ptStart + _nativePageTable.Size;
_trackingEvent = VirtualMemoryEvent;
bool added = NativeSignalHandler.AddTrackedRegion((nuint)ptStart, (nuint)ptEnd, Marshal.GetFunctionPointerForDelegate(_trackingEvent));
if (!added)
{
throw new InvalidOperationException("Number of allowed tracked regions exceeded.");
}
}
public void Map(ulong va, ulong pa, ulong size, AddressSpacePartitioned addressSpace, MemoryBlock backingMemory, bool privateMap)
{
while (size != 0)
{
_pageTable.Map(va, pa);
EnsureCommitment(va);
if (privateMap)
{
_nativePageTable.Write((va / PageSize) * PteSize, GetPte(va, addressSpace.GetPointer(va, PageSize)));
}
else
{
_nativePageTable.Write((va / PageSize) * PteSize, GetPte(va, backingMemory.GetPointer(pa, PageSize)));
}
va += PageSize;
pa += PageSize;
size -= PageSize;
}
}
public void Unmap(ulong va, ulong size)
{
IntPtr guardPagePtr = GetGuardPagePointer();
while (size != 0)
{
_pageTable.Unmap(va);
_nativePageTable.Write((va / PageSize) * PteSize, GetPte(va, guardPagePtr));
va += PageSize;
size -= PageSize;
}
}
public ulong Read(ulong va)
{
ulong pte = _nativePageTable.Read<ulong>((va / PageSize) * PteSize);
pte += va & ~(ulong)PageMask;
return pte + (va & PageMask);
}
public void Update(ulong va, IntPtr ptr, ulong size)
{
ulong remainingSize = size;
while (remainingSize != 0)
{
EnsureCommitment(va);
_nativePageTable.Write((va / PageSize) * PteSize, GetPte(va, ptr));
va += PageSize;
ptr += PageSize;
remainingSize -= PageSize;
}
}
private void EnsureCommitment(ulong va)
{
ulong bit = va >> _pageCommitmentBits;
int index = (int)(bit / (sizeof(ulong) * 8));
int shift = (int)(bit % (sizeof(ulong) * 8));
ulong mask = 1UL << shift;
ulong oldMask = _pageCommitmentBitmap[index];
if ((oldMask & mask) == 0)
{
lock (_pageCommitmentBitmap)
{
oldMask = _pageCommitmentBitmap[index];
if ((oldMask & mask) != 0)
{
return;
}
_nativePageTable.Commit(bit * _hostPageSize, _hostPageSize);
Span<ulong> pageSpan = MemoryMarshal.Cast<byte, ulong>(_nativePageTable.GetSpan(bit * _hostPageSize, (int)_hostPageSize));
Debug.Assert(pageSpan.Length == _entriesPerPtPage);
IntPtr guardPagePtr = GetGuardPagePointer();
for (int i = 0; i < pageSpan.Length; i++)
{
pageSpan[i] = GetPte((bit << _pageCommitmentBits) | ((ulong)i * PageSize), guardPagePtr);
}
_pageCommitmentBitmap[index] = oldMask | mask;
}
}
}
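// Worked example, not from the original code: with a 4 KiB host page,
// _entriesPerPtPage = 4096 / 8 = 512, _bitsPerPtPage = 9 and
// _pageCommitmentBits = 12 + 9 = 21, so each bit of _pageCommitmentBitmap covers one
// committed page-table page translating 2 MiB of guest address space. The unlocked
// pre-check plus the re-check under the lock is a double-checked commit, keeping the
// common already-committed path lock-free.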
private IntPtr GetGuardPagePointer()
{
return _nativePageTable.GetPointer(_nativePageTable.Size - _hostPageSize, _hostPageSize);
}
private static ulong GetPte(ulong va, IntPtr ptr)
{
Debug.Assert((va & PageMask) == 0);
return (ulong)ptr - va;
}
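// Worked example, not from the original code: the PTE stores hostPointer - pageVa,
// so a lookup is a single add with no masking of the low bits. If guest page 0x5000
// maps to host pointer 0x7F00_0000_3000, the stored PTE is 0x7EFF_FFFF_E000, and
// Read(0x5123) computes 0x7EFF_FFFF_E000 + 0x5000 + 0x123 = 0x7F00_0000_3123.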
public ulong GetPhysicalAddress(ulong va)
{
return _pageTable.Read(va) + (va & PageMask);
}
private ulong VirtualMemoryEvent(ulong address, ulong size, bool write)
{
if (address < _nativePageTable.Size - _hostPageSize)
{
// Some prefetch instructions do not cause faults with invalid addresses.
// Retry if we are hitting a case where the page table is unmapped, the next
// run will execute the actual instruction.
// The address loaded from the page table will be invalid, and it should hit the else case
// if the instruction faults on unmapped or protected memory.
ulong va = address * (PageSize / sizeof(ulong));
EnsureCommitment(va);
return (ulong)_nativePageTable.Pointer + address;
}
else
{
throw new InvalidMemoryRegionException();
}
}
private void Dispose(bool disposing)
{
if (!_disposed)
{
if (disposing)
{
NativeSignalHandler.RemoveTrackedRegion((nuint)_nativePageTable.Pointer);
_nativePageTable.Dispose();
}
_disposed = true;
}
}
public void Dispose()
{
Dispose(disposing: true);
GC.SuppressFinalize(this);
}
}
}

View File

@@ -15,9 +15,9 @@ namespace Ryujinx.Cpu.Jit
_tickSource = tickSource;
_translator = new Translator(new JitMemoryAllocator(forJit: true), memory, for64Bit);
if (memory.Type.IsHostMappedOrTracked())
{
NativeSignalHandler.InitializeSignalHandler();
}
memory.UnmapEvent += UnmapHandler;

View File

@@ -3,6 +3,7 @@ using Ryujinx.Memory;
using Ryujinx.Memory.Range;
using Ryujinx.Memory.Tracking;
using System;
using System.Buffers;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.CompilerServices;
@@ -14,7 +15,7 @@ namespace Ryujinx.Cpu.Jit
/// <summary>
/// Represents a CPU memory manager.
/// </summary>
public sealed class MemoryManager : VirtualMemoryManagerRefCountedBase, IMemoryManager, IVirtualMemoryManagerTracked
{
private const int PteSize = 8;
@@ -24,7 +25,7 @@ namespace Ryujinx.Cpu.Jit
private readonly InvalidAccessHandler _invalidAccessHandler;
/// <inheritdoc/>
public bool UsesPrivateAllocations => false;
/// <summary>
/// Address space width in bits.
@@ -33,6 +34,8 @@ namespace Ryujinx.Cpu.Jit
private readonly MemoryBlock _pageTable;
private readonly ManagedPageFlags _pages;
/// <summary>
/// Page table base pointer.
/// </summary>
@@ -70,6 +73,8 @@ namespace Ryujinx.Cpu.Jit
AddressSpaceSize = asSize;
_pageTable = new MemoryBlock((asSize / PageSize) * PteSize);
_pages = new ManagedPageFlags(AddressSpaceBits);
Tracking = new MemoryTracking(this, PageSize);
}
@@ -89,15 +94,10 @@ namespace Ryujinx.Cpu.Jit
remainingSize -= PageSize;
}
_pages.AddMapping(oVa, size);
Tracking.Map(oVa, size);
}
/// <inheritdoc/>
public void MapForeign(ulong va, nuint hostPointer, ulong size)
{
throw new NotSupportedException();
}
/// <inheritdoc/>
public void Unmap(ulong va, ulong size)
{
@@ -111,6 +111,7 @@ namespace Ryujinx.Cpu.Jit
UnmapEvent?.Invoke(va, size);
Tracking.Unmap(va, size);
_pages.RemoveMapping(va, size);
ulong remainingSize = size;
while (remainingSize != 0)
@@ -122,18 +123,29 @@ namespace Ryujinx.Cpu.Jit
}
}
/// <inheritdoc/>
public T Read<T>(ulong va) where T : unmanaged
{
return MemoryMarshal.Cast<byte, T>(GetSpan(va, Unsafe.SizeOf<T>()))[0];
}
/// <inheritdoc/>
public override T ReadTracked<T>(ulong va)
{
try
{
return base.ReadTracked<T>(va);
}
catch (InvalidMemoryRegionException)
{
if (_invalidAccessHandler == null || !_invalidAccessHandler(va))
{
throw;
}
return default;
}
}
/// <inheritdoc/>
public T ReadGuest<T>(ulong va) where T : unmanaged
{
try
{
SignalMemoryTrackingImpl(va, (ulong)Unsafe.SizeOf<T>(), false, true);
return Read<T>(va);
}
@@ -164,107 +176,11 @@ namespace Ryujinx.Cpu.Jit
}
}
/// <inheritdoc/>
public void Write<T>(ulong va, T value) where T : unmanaged
{
Write(va, MemoryMarshal.Cast<T, byte>(MemoryMarshal.CreateSpan(ref value, 1)));
}
/// <inheritdoc/>
public void Write(ulong va, ReadOnlySpan<byte> data)
{
if (data.Length == 0)
{
return;
}
SignalMemoryTracking(va, (ulong)data.Length, true);
WriteImpl(va, data);
}
/// <inheritdoc/>
public void WriteUntracked(ulong va, ReadOnlySpan<byte> data)
{
if (data.Length == 0)
{
return;
}
WriteImpl(va, data);
}
/// <inheritdoc/>
public bool WriteWithRedundancyCheck(ulong va, ReadOnlySpan<byte> data)
{
if (data.Length == 0)
{
return false;
}
SignalMemoryTracking(va, (ulong)data.Length, false);
if (IsContiguousAndMapped(va, data.Length))
{
var target = _backingMemory.GetSpan(GetPhysicalAddressInternal(va), data.Length);
bool changed = !data.SequenceEqual(target);
if (changed)
{
data.CopyTo(target);
}
return changed;
}
else
{
WriteImpl(va, data);
return true;
}
}
/// <inheritdoc/>
public override void Write(ulong va, ReadOnlySpan<byte> data)
{
try
{
base.Write(va, data);
}
catch (InvalidMemoryRegionException)
{
@@ -276,60 +192,47 @@ namespace Ryujinx.Cpu.Jit
}
/// <inheritdoc/>
public void WriteGuest<T>(ulong va, T value) where T : unmanaged
{
Span<byte> data = MemoryMarshal.Cast<T, byte>(MemoryMarshal.CreateSpan(ref value, 1));
SignalMemoryTrackingImpl(va, (ulong)data.Length, true, true);
Write(va, data);
}
/// <inheritdoc/>
public override void WriteUntracked(ulong va, ReadOnlySpan<byte> data)
{
try
{
base.WriteUntracked(va, data);
}
catch (InvalidMemoryRegionException)
{
if (_invalidAccessHandler == null || !_invalidAccessHandler(va))
{
throw;
}
}
}
/// <inheritdoc/>
public override ReadOnlySequence<byte> GetReadOnlySequence(ulong va, int size, bool tracked = false)
{
try
{
return base.GetReadOnlySequence(va, size, tracked);
}
catch (InvalidMemoryRegionException)
{
if (_invalidAccessHandler == null || !_invalidAccessHandler(va))
{
throw;
}
return ReadOnlySequence<byte>.Empty;
}
}
/// <inheritdoc/>
public ref T GetRef<T>(ulong va) where T : unmanaged
{
if (!IsContiguous(va, Unsafe.SizeOf<T>()))
@@ -342,56 +245,6 @@ namespace Ryujinx.Cpu.Jit
return ref _backingMemory.GetRef<T>(GetPhysicalAddressInternal(va));
}
/// <summary>
/// Computes the number of pages in a virtual address range.
/// </summary>
/// <param name="va">Virtual address of the range</param>
/// <param name="size">Size of the range</param>
/// <param name="startVa">The virtual address of the beginning of the first page</param>
/// <remarks>This function does not differentiate between allocated and unallocated pages.</remarks>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static int GetPagesCount(ulong va, uint size, out ulong startVa)
{
// WARNING: Always check if ulong does not overflow during the operations.
startVa = va & ~(ulong)PageMask;
ulong vaSpan = (va - startVa + size + PageMask) & ~(ulong)PageMask;
return (int)(vaSpan / PageSize);
}
private static void ThrowMemoryNotContiguous() => throw new MemoryNotContiguousException();
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private bool IsContiguousAndMapped(ulong va, int size) => IsContiguous(va, size) && IsMapped(va);
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private bool IsContiguous(ulong va, int size)
{
if (!ValidateAddress(va) || !ValidateAddressAndSize(va, (ulong)size))
{
return false;
}
int pages = GetPagesCount(va, (uint)size, out va);
for (int page = 0; page < pages - 1; page++)
{
if (!ValidateAddress(va + PageSize))
{
return false;
}
if (GetPhysicalAddressInternal(va) + PageSize != GetPhysicalAddressInternal(va + PageSize))
{
return false;
}
va += PageSize;
}
return true;
}
/// <inheritdoc/>
public IEnumerable<HostMemoryRange> GetHostRegions(ulong va, ulong size)
{
@@ -496,9 +349,8 @@ namespace Ryujinx.Cpu.Jit
return true;
}
/// <inheritdoc/>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public override bool IsMapped(ulong va)
{
if (!ValidateAddress(va))
{
@@ -508,9 +360,9 @@ namespace Ryujinx.Cpu.Jit
return _pageTable.Read<ulong>((va / PageSize) * PteSize) != 0;
}
private nuint GetPhysicalAddressInternal(ulong va)
{
return (nuint)(PteToPa(_pageTable.Read<ulong>((va / PageSize) * PteSize) & ~(0xffffUL << 48)) + (va & PageMask));
}
/// <inheritdoc/>
@@ -520,50 +372,57 @@ namespace Ryujinx.Cpu.Jit
}
/// <inheritdoc/>
public void TrackingReprotect(ulong va, ulong size, MemoryPermission protection)
{
    AssertValidAddressAndSize(va, size);
    // Protection is inverted on software pages, since the default value is 0.
    protection = (~protection) & MemoryPermission.ReadAndWrite;
    long tag = protection switch
    {
        MemoryPermission.None => 0L,
        MemoryPermission.Write => 2L << PointerTagBit,
        _ => 3L << PointerTagBit,
    };
    int pages = GetPagesCount(va, (uint)size, out va);
    ulong pageStart = va >> PageBits;
    long invTagMask = ~(0xffffL << 48);
    for (int page = 0; page < pages; page++)
    {
        ref long pageRef = ref _pageTable.GetRef<long>(pageStart * PteSize);
        long pte;
        do
        {
            pte = Volatile.Read(ref pageRef);
        }
        while (pte != 0 && Interlocked.CompareExchange(ref pageRef, (pte & invTagMask) | tag, pte) != pte);
        pageStart++;
    }
}
public void TrackingReprotect(ulong va, ulong size, MemoryPermission protection, bool guest)
{
    AssertValidAddressAndSize(va, size);
    if (guest)
    {
        // Protection is inverted on software pages, since the default value is 0.
        protection = (~protection) & MemoryPermission.ReadAndWrite;
        long tag = protection switch
        {
            MemoryPermission.None => 0L,
            MemoryPermission.Write => 2L << PointerTagBit,
            _ => 3L << PointerTagBit,
        };
        int pages = GetPagesCount(va, (uint)size, out va);
        ulong pageStart = va >> PageBits;
        long invTagMask = ~(0xffffL << 48);
        for (int page = 0; page < pages; page++)
        {
            ref long pageRef = ref _pageTable.GetRef<long>(pageStart * PteSize);
            long pte;
            do
            {
                pte = Volatile.Read(ref pageRef);
            }
            while (pte != 0 && Interlocked.CompareExchange(ref pageRef, (pte & invTagMask) | tag, pte) != pte);
            pageStart++;
        }
    }
    else
    {
        _pages.TrackingReprotect(va, size, protection);
    }
}
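The tag values above pack a two-bit access mask into the PTE's upper (tag) bits; a sketch of the mapping, with the inversion applied first (PointerTagBit itself is defined elsewhere in this file):

// requested protection -> inverted software protection -> tag stored in the PTE
// ReadAndWrite         -> None                         -> 0 (no bits set: access proceeds untracked)
// Read                 -> Write                        -> 2L << PointerTagBit (writes fault into tracking)
// None                 -> ReadAndWrite                 -> 3L << PointerTagBit (reads and writes fault)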
/// <inheritdoc/>
public RegionHandle BeginTracking(ulong address, ulong size, int id)
public RegionHandle BeginTracking(ulong address, ulong size, int id, RegionFlags flags = RegionFlags.None)
{
return Tracking.BeginTracking(address, size, id);
return Tracking.BeginTracking(address, size, id, flags);
}
/// <inheritdoc/>
public MultiRegionHandle BeginGranularTracking(ulong address, ulong size, IEnumerable<IRegionHandle> handles, ulong granularity, int id)
public MultiRegionHandle BeginGranularTracking(ulong address, ulong size, IEnumerable<IRegionHandle> handles, ulong granularity, int id, RegionFlags flags = RegionFlags.None)
{
return Tracking.BeginGranularTracking(address, size, handles, granularity, id);
return Tracking.BeginGranularTracking(address, size, handles, granularity, id, flags);
}
/// <inheritdoc/>
@@ -572,8 +431,7 @@ namespace Ryujinx.Cpu.Jit
return Tracking.BeginSmartGranularTracking(address, size, granularity, id);
}
/// <inheritdoc/>
public void SignalMemoryTracking(ulong va, ulong size, bool write, bool precise = false, int? exemptId = null)
private void SignalMemoryTrackingImpl(ulong va, ulong size, bool write, bool guest, bool precise = false, int? exemptId = null)
{
AssertValidAddressAndSize(va, size);
@@ -583,31 +441,45 @@ namespace Ryujinx.Cpu.Jit
return;
}
// We emulate guard pages for software memory access. This makes for an easy transition to
// tracking using host guard pages in future, but also supporting platforms where this is not possible.
// Write tag includes read protection, since we don't have any read actions that aren't performed before write too.
long tag = (write ? 3L : 1L) << PointerTagBit;
int pages = GetPagesCount(va, (uint)size, out _);
ulong pageStart = va >> PageBits;
for (int page = 0; page < pages; page++)
{
    ref long pageRef = ref _pageTable.GetRef<long>(pageStart * PteSize);
    long pte;
    pte = Volatile.Read(ref pageRef);
    if ((pte & tag) != 0)
    {
        Tracking.VirtualMemoryEvent(va, size, write, precise: false, exemptId);
        break;
    }
    pageStart++;
}
// If the memory tracking is coming from the guest, use the tag bits in the page table entry.
// Otherwise, use the managed page flags.
if (guest)
{
    // We emulate guard pages for software memory access. This makes for an easy transition to
    // tracking using host guard pages in future, but also supporting platforms where this is not possible.
    // Write tag includes read protection, since we don't have any read actions that aren't performed before write too.
    long tag = (write ? 3L : 1L) << PointerTagBit;
    int pages = GetPagesCount(va, (uint)size, out _);
    ulong pageStart = va >> PageBits;
    for (int page = 0; page < pages; page++)
    {
        ref long pageRef = ref _pageTable.GetRef<long>(pageStart * PteSize);
        long pte = Volatile.Read(ref pageRef);
        if ((pte & tag) != 0)
        {
            Tracking.VirtualMemoryEvent(va, size, write, precise: false, exemptId, true);
            break;
        }
        pageStart++;
    }
}
else
{
    _pages.SignalMemoryTracking(Tracking, va, size, write, exemptId);
}
}
/// <inheritdoc/>
public override void SignalMemoryTracking(ulong va, ulong size, bool write, bool precise = false, int? exemptId = null)
{
SignalMemoryTrackingImpl(va, size, write, false, precise, exemptId);
}
private ulong PaToPte(ulong pa)
@@ -625,10 +497,16 @@ namespace Ryujinx.Cpu.Jit
/// </summary>
protected override void Destroy() => _pageTable.Dispose();
protected override Span<byte> GetPhysicalAddressSpan(ulong pa, int size)
protected override Memory<byte> GetPhysicalAddressMemory(nuint pa, int size)
=> _backingMemory.GetMemory(pa, size);
protected override Span<byte> GetPhysicalAddressSpan(nuint pa, int size)
=> _backingMemory.GetSpan(pa, size);
protected override ulong TranslateVirtualAddressForRead(ulong va)
protected override nuint TranslateVirtualAddressChecked(ulong va)
=> GetPhysicalAddressInternal(va);
protected override nuint TranslateVirtualAddressUnchecked(ulong va)
=> GetPhysicalAddressInternal(va);
}
}


@@ -3,33 +3,18 @@ using Ryujinx.Memory;
using Ryujinx.Memory.Range;
using Ryujinx.Memory.Tracking;
using System;
using System.Buffers;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Threading;
namespace Ryujinx.Cpu.Jit
{
/// <summary>
/// Represents a CPU memory manager which maps guest virtual memory directly onto a host virtual region.
/// </summary>
public sealed class MemoryManagerHostMapped : VirtualMemoryManagerRefCountedBase<ulong, ulong>, IMemoryManager, IVirtualMemoryManagerTracked, IWritableBlock
public sealed class MemoryManagerHostMapped : VirtualMemoryManagerRefCountedBase, IMemoryManager, IVirtualMemoryManagerTracked
{
public const int PageToPteShift = 5; // 32 pages (2 bits each) in one ulong page table entry.
public const ulong BlockMappedMask = 0x5555555555555555; // First bit of each table entry set.
private enum HostMappedPtBits : ulong
{
Unmapped = 0,
Mapped,
WriteTracked,
ReadWriteTracked,
MappedReplicated = 0x5555555555555555,
WriteTrackedReplicated = 0xaaaaaaaaaaaaaaaa,
ReadWriteTrackedReplicated = ulong.MaxValue,
}
private readonly InvalidAccessHandler _invalidAccessHandler;
private readonly bool _unsafeMode;
@@ -39,10 +24,10 @@ namespace Ryujinx.Cpu.Jit
private readonly MemoryEhMeilleure _memoryEh;
private readonly ulong[] _pageBitmap;
private readonly ManagedPageFlags _pages;
/// <inheritdoc/>
public bool Supports4KBPages => MemoryBlock.GetPageSize() == PageSize;
public bool UsesPrivateAllocations => false;
public int AddressSpaceBits { get; }
@@ -81,7 +66,7 @@ namespace Ryujinx.Cpu.Jit
AddressSpaceBits = asBits;
_pageBitmap = new ulong[1 << (AddressSpaceBits - (PageBits + PageToPteShift))];
_pages = new ManagedPageFlags(AddressSpaceBits);
Tracking = new MemoryTracking(this, (int)MemoryBlock.GetPageSize(), invalidAccessHandler);
_memoryEh = new MemoryEhMeilleure(_addressSpace.Base, _addressSpace.Mirror, Tracking);
@@ -94,7 +79,7 @@ namespace Ryujinx.Cpu.Jit
/// <param name="size">Size of the range in bytes</param>
private void AssertMapped(ulong va, ulong size)
{
if (!ValidateAddressAndSize(va, size) || !IsRangeMappedImpl(va, size))
if (!ValidateAddressAndSize(va, size) || !_pages.IsRangeMapped(va, size))
{
throw new InvalidMemoryRegionException($"Not mapped: va=0x{va:X16}, size=0x{size:X16}");
}
@@ -106,18 +91,12 @@ namespace Ryujinx.Cpu.Jit
AssertValidAddressAndSize(va, size);
_addressSpace.Map(va, pa, size, flags);
AddMapping(va, size);
_pages.AddMapping(va, size);
PtMap(va, pa, size);
Tracking.Map(va, size);
}
/// <inheritdoc/>
public void MapForeign(ulong va, nuint hostPointer, ulong size)
{
throw new NotSupportedException();
}
/// <inheritdoc/>
public void Unmap(ulong va, ulong size)
{
@@ -126,7 +105,7 @@ namespace Ryujinx.Cpu.Jit
UnmapEvent?.Invoke(va, size);
Tracking.Unmap(va, size);
RemoveMapping(va, size);
_pages.RemoveMapping(va, size);
PtUnmap(va, size);
_addressSpace.Unmap(va, size);
}
@@ -154,8 +133,7 @@ namespace Ryujinx.Cpu.Jit
}
}
/// <inheritdoc/>
public T Read<T>(ulong va) where T : unmanaged
public override T Read<T>(ulong va)
{
try
{
@@ -174,14 +152,11 @@ namespace Ryujinx.Cpu.Jit
}
}
/// <inheritdoc/>
public T ReadTracked<T>(ulong va) where T : unmanaged
public override T ReadTracked<T>(ulong va)
{
try
{
SignalMemoryTracking(va, (ulong)Unsafe.SizeOf<T>(), false);
return Read<T>(va);
return base.ReadTracked<T>(va);
}
catch (InvalidMemoryRegionException)
{
@@ -194,7 +169,6 @@ namespace Ryujinx.Cpu.Jit
}
}
/// <inheritdoc/>
public override void Read(ulong va, Span<byte> data)
{
try
@@ -212,9 +186,7 @@ namespace Ryujinx.Cpu.Jit
}
}
/// <inheritdoc/>
public void Write<T>(ulong va, T value) where T : unmanaged
public override void Write<T>(ulong va, T value)
{
try
{
@@ -231,8 +203,7 @@ namespace Ryujinx.Cpu.Jit
}
}
/// <inheritdoc/>
public void Write(ulong va, ReadOnlySpan<byte> data)
public override void Write(ulong va, ReadOnlySpan<byte> data)
{
try
{
@@ -249,8 +220,7 @@ namespace Ryujinx.Cpu.Jit
}
}
/// <inheritdoc/>
public void WriteUntracked(ulong va, ReadOnlySpan<byte> data)
public override void WriteUntracked(ulong va, ReadOnlySpan<byte> data)
{
try
{
@@ -267,8 +237,7 @@ namespace Ryujinx.Cpu.Jit
}
}
/// <inheritdoc/>
public bool WriteWithRedundancyCheck(ulong va, ReadOnlySpan<byte> data)
public override bool WriteWithRedundancyCheck(ulong va, ReadOnlySpan<byte> data)
{
try
{
@@ -295,8 +264,21 @@ namespace Ryujinx.Cpu.Jit
}
}
/// <inheritdoc/>
public ReadOnlySpan<byte> GetSpan(ulong va, int size, bool tracked = false)
public override ReadOnlySequence<byte> GetReadOnlySequence(ulong va, int size, bool tracked = false)
{
if (tracked)
{
SignalMemoryTracking(va, (ulong)size, write: false);
}
else
{
AssertMapped(va, (ulong)size);
}
return new ReadOnlySequence<byte>(_addressSpace.Mirror.GetMemory(va, size));
}
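A minimal consumption sketch for the sequence returned above (hypothetical caller code; a ReadOnlySequence<byte> may span multiple segments in other implementations, so callers should iterate rather than assume one contiguous span):

ReadOnlySequence<byte> sequence = memoryManager.GetReadOnlySequence(va, size);
foreach (ReadOnlyMemory<byte> segment in sequence)
{
    // ProcessSegment is a placeholder for whatever the caller does with the bytes.
    ProcessSegment(segment.Span);
}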
public override ReadOnlySpan<byte> GetSpan(ulong va, int size, bool tracked = false)
{
if (tracked)
{
@@ -310,8 +292,7 @@ namespace Ryujinx.Cpu.Jit
return _addressSpace.Mirror.GetSpan(va, size);
}
/// <inheritdoc/>
public WritableRegion GetWritableRegion(ulong va, int size, bool tracked = false)
public override WritableRegion GetWritableRegion(ulong va, int size, bool tracked = false)
{
if (tracked)
{
@@ -325,7 +306,6 @@ namespace Ryujinx.Cpu.Jit
return _addressSpace.Mirror.GetWritableRegion(va, size);
}
/// <inheritdoc/>
public ref T GetRef<T>(ulong va) where T : unmanaged
{
SignalMemoryTracking(va, (ulong)Unsafe.SizeOf<T>(), true);
@@ -333,26 +313,10 @@ namespace Ryujinx.Cpu.Jit
return ref _addressSpace.Mirror.GetRef<T>(va);
}
/// <inheritdoc/>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public bool IsMapped(ulong va)
public override bool IsMapped(ulong va)
{
return ValidateAddress(va) && IsMappedImpl(va);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private bool IsMappedImpl(ulong va)
{
ulong page = va >> PageBits;
int bit = (int)((page & 31) << 1);
int pageIndex = (int)(page >> PageToPteShift);
ref ulong pageRef = ref _pageBitmap[pageIndex];
ulong pte = Volatile.Read(ref pageRef);
return ((pte >> bit) & 3) != 0;
return ValidateAddress(va) && _pages.IsMapped(va);
}
/// <inheritdoc/>
@@ -360,58 +324,7 @@ namespace Ryujinx.Cpu.Jit
{
AssertValidAddressAndSize(va, size);
return IsRangeMappedImpl(va, size);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static void GetPageBlockRange(ulong pageStart, ulong pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex)
{
startMask = ulong.MaxValue << ((int)(pageStart & 31) << 1);
endMask = ulong.MaxValue >> (64 - ((int)(pageEnd & 31) << 1));
pageIndex = (int)(pageStart >> PageToPteShift);
pageEndIndex = (int)((pageEnd - 1) >> PageToPteShift);
}
private bool IsRangeMappedImpl(ulong va, ulong size)
{
int pages = GetPagesCount(va, size, out _);
if (pages == 1)
{
return IsMappedImpl(va);
}
ulong pageStart = va >> PageBits;
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
// Check if either bit in each 2 bit page entry is set.
// OR the block with itself shifted down by 1, and check the first bit of each entry.
ulong mask = BlockMappedMask & startMask;
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask &= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte = Volatile.Read(ref pageRef);
pte |= pte >> 1;
if ((pte & mask) != mask)
{
return false;
}
mask = BlockMappedMask;
}
return true;
return _pages.IsRangeMapped(va, size);
}
/// <inheritdoc/>
@@ -472,11 +385,10 @@ namespace Ryujinx.Cpu.Jit
return _pageTable.Read(va) + (va & PageMask);
}
/// <inheritdoc/>
/// <remarks>
/// This function also validates that the given range is both valid and mapped, and will throw if it is not.
/// </remarks>
public void SignalMemoryTracking(ulong va, ulong size, bool write, bool precise = false, int? exemptId = null)
public override void SignalMemoryTracking(ulong va, ulong size, bool write, bool precise = false, int? exemptId = null)
{
AssertValidAddressAndSize(va, size);
@@ -486,93 +398,7 @@ namespace Ryujinx.Cpu.Jit
return;
}
// Software table, used for managed memory tracking.
int pages = GetPagesCount(va, size, out _);
ulong pageStart = va >> PageBits;
if (pages == 1)
{
ulong tag = (ulong)(write ? HostMappedPtBits.WriteTracked : HostMappedPtBits.ReadWriteTracked);
int bit = (int)((pageStart & 31) << 1);
int pageIndex = (int)(pageStart >> PageToPteShift);
ref ulong pageRef = ref _pageBitmap[pageIndex];
ulong pte = Volatile.Read(ref pageRef);
ulong state = ((pte >> bit) & 3);
if (state >= tag)
{
Tracking.VirtualMemoryEvent(va, size, write, precise: false, exemptId);
return;
}
else if (state == 0)
{
ThrowInvalidMemoryRegionException($"Not mapped: va=0x{va:X16}, size=0x{size:X16}");
}
}
else
{
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
ulong mask = startMask;
ulong anyTrackingTag = (ulong)HostMappedPtBits.WriteTrackedReplicated;
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask &= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte = Volatile.Read(ref pageRef);
ulong mappedMask = mask & BlockMappedMask;
ulong mappedPte = pte | (pte >> 1);
if ((mappedPte & mappedMask) != mappedMask)
{
ThrowInvalidMemoryRegionException($"Not mapped: va=0x{va:X16}, size=0x{size:X16}");
}
pte &= mask;
if ((pte & anyTrackingTag) != 0) // Search for any tracking.
{
// Writes trigger any tracking.
// Only trigger tracking from reads if both bits are set on any page.
if (write || (pte & (pte >> 1) & BlockMappedMask) != 0)
{
Tracking.VirtualMemoryEvent(va, size, write, precise: false, exemptId);
break;
}
}
mask = ulong.MaxValue;
}
}
}
/// <summary>
/// Computes the number of pages in a virtual address range.
/// </summary>
/// <param name="va">Virtual address of the range</param>
/// <param name="size">Size of the range</param>
/// <param name="startVa">The virtual address of the beginning of the first page</param>
/// <remarks>This function does not differentiate between allocated and unallocated pages.</remarks>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static int GetPagesCount(ulong va, ulong size, out ulong startVa)
{
// WARNING: Always check if ulong does not overflow during the operations.
startVa = va & ~(ulong)PageMask;
ulong vaSpan = (va - startVa + size + PageMask) & ~(ulong)PageMask;
return (int)(vaSpan / PageSize);
_pages.SignalMemoryTracking(Tracking, va, size, write, exemptId);
}
/// <inheritdoc/>
@@ -582,103 +408,28 @@ namespace Ryujinx.Cpu.Jit
}
/// <inheritdoc/>
public void TrackingReprotect(ulong va, ulong size, MemoryPermission protection)
{
    // Protection is inverted on software pages, since the default value is 0.
    protection = (~protection) & MemoryPermission.ReadAndWrite;
    int pages = GetPagesCount(va, size, out va);
    ulong pageStart = va >> PageBits;
    if (pages == 1)
    {
        ulong protTag = protection switch
        {
            MemoryPermission.None => (ulong)HostMappedPtBits.Mapped,
            MemoryPermission.Write => (ulong)HostMappedPtBits.WriteTracked,
            _ => (ulong)HostMappedPtBits.ReadWriteTracked,
        };
        int bit = (int)((pageStart & 31) << 1);
        ulong tagMask = 3UL << bit;
        ulong invTagMask = ~tagMask;
        ulong tag = protTag << bit;
        int pageIndex = (int)(pageStart >> PageToPteShift);
        ref ulong pageRef = ref _pageBitmap[pageIndex];
        ulong pte;
        do
        {
            pte = Volatile.Read(ref pageRef);
        }
        while ((pte & tagMask) != 0 && Interlocked.CompareExchange(ref pageRef, (pte & invTagMask) | tag, pte) != pte);
    }
    else
    {
        ulong pageEnd = pageStart + (ulong)pages;
        GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
        ulong mask = startMask;
        ulong protTag = protection switch
        {
            MemoryPermission.None => (ulong)HostMappedPtBits.MappedReplicated,
            MemoryPermission.Write => (ulong)HostMappedPtBits.WriteTrackedReplicated,
            _ => (ulong)HostMappedPtBits.ReadWriteTrackedReplicated,
        };
        while (pageIndex <= pageEndIndex)
        {
            if (pageIndex == pageEndIndex)
            {
                mask &= endMask;
            }
            ref ulong pageRef = ref _pageBitmap[pageIndex++];
            ulong pte;
            ulong mappedMask;
            // Change the protection of all 2 bit entries that are mapped.
            do
            {
                pte = Volatile.Read(ref pageRef);
                mappedMask = pte | (pte >> 1);
                mappedMask |= (mappedMask & BlockMappedMask) << 1;
                mappedMask &= mask; // Only update mapped pages within the given range.
            }
            while (Interlocked.CompareExchange(ref pageRef, (pte & (~mappedMask)) | (protTag & mappedMask), pte) != pte);
            mask = ulong.MaxValue;
        }
    }
    protection = protection switch
    {
        MemoryPermission.None => MemoryPermission.ReadAndWrite,
        MemoryPermission.Write => MemoryPermission.Read,
        _ => MemoryPermission.None,
    };
    _addressSpace.Base.Reprotect(va, size, protection, false);
}
public void TrackingReprotect(ulong va, ulong size, MemoryPermission protection, bool guest)
{
    if (guest)
    {
        _addressSpace.Base.Reprotect(va, size, protection, false);
    }
    else
    {
        _pages.TrackingReprotect(va, size, protection);
    }
}
/// <inheritdoc/>
public RegionHandle BeginTracking(ulong address, ulong size, int id)
public RegionHandle BeginTracking(ulong address, ulong size, int id, RegionFlags flags = RegionFlags.None)
{
return Tracking.BeginTracking(address, size, id);
return Tracking.BeginTracking(address, size, id, flags);
}
/// <inheritdoc/>
public MultiRegionHandle BeginGranularTracking(ulong address, ulong size, IEnumerable<IRegionHandle> handles, ulong granularity, int id)
public MultiRegionHandle BeginGranularTracking(ulong address, ulong size, IEnumerable<IRegionHandle> handles, ulong granularity, int id, RegionFlags flags = RegionFlags.None)
{
return Tracking.BeginGranularTracking(address, size, handles, granularity, id);
return Tracking.BeginGranularTracking(address, size, handles, granularity, id, flags);
}
/// <inheritdoc/>
@@ -687,86 +438,6 @@ namespace Ryujinx.Cpu.Jit
return Tracking.BeginSmartGranularTracking(address, size, granularity, id);
}
/// <summary>
/// Adds the given address mapping to the page table.
/// </summary>
/// <param name="va">Virtual memory address</param>
/// <param name="size">Size to be mapped</param>
private void AddMapping(ulong va, ulong size)
{
int pages = GetPagesCount(va, size, out _);
ulong pageStart = va >> PageBits;
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
ulong mask = startMask;
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask &= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte;
ulong mappedMask;
// Map all 2-bit entries that are unmapped.
do
{
pte = Volatile.Read(ref pageRef);
mappedMask = pte | (pte >> 1);
mappedMask |= (mappedMask & BlockMappedMask) << 1;
mappedMask |= ~mask; // Treat everything outside the range as mapped, thus unchanged.
}
while (Interlocked.CompareExchange(ref pageRef, (pte & mappedMask) | (BlockMappedMask & (~mappedMask)), pte) != pte);
mask = ulong.MaxValue;
}
}
/// <summary>
/// Removes the given address mapping from the page table.
/// </summary>
/// <param name="va">Virtual memory address</param>
/// <param name="size">Size to be unmapped</param>
private void RemoveMapping(ulong va, ulong size)
{
int pages = GetPagesCount(va, size, out _);
ulong pageStart = va >> PageBits;
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
startMask = ~startMask;
endMask = ~endMask;
ulong mask = startMask;
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask |= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte;
do
{
pte = Volatile.Read(ref pageRef);
}
while (Interlocked.CompareExchange(ref pageRef, pte & mask, pte) != pte);
mask = 0;
}
}
/// <summary>
/// Disposes of resources used by the memory manager.
/// </summary>
@@ -776,10 +447,16 @@ namespace Ryujinx.Cpu.Jit
_memoryEh.Dispose();
}
protected override Span<byte> GetPhysicalAddressSpan(ulong pa, int size)
protected override Memory<byte> GetPhysicalAddressMemory(nuint pa, int size)
=> _addressSpace.Mirror.GetMemory(pa, size);
protected override Span<byte> GetPhysicalAddressSpan(nuint pa, int size)
=> _addressSpace.Mirror.GetSpan(pa, size);
protected override ulong TranslateVirtualAddressForRead(ulong va)
=> va;
protected override nuint TranslateVirtualAddressChecked(ulong va)
=> (nuint)GetPhysicalAddressChecked(va);
protected override nuint TranslateVirtualAddressUnchecked(ulong va)
=> (nuint)GetPhysicalAddressInternal(va);
}
}


@@ -0,0 +1,568 @@
using ARMeilleure.Memory;
using Ryujinx.Cpu.Jit.HostTracked;
using Ryujinx.Cpu.Signal;
using Ryujinx.Memory;
using Ryujinx.Memory.Range;
using Ryujinx.Memory.Tracking;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.CompilerServices;
namespace Ryujinx.Cpu.Jit
{
/// <summary>
/// Represents a CPU memory manager which maps guest virtual memory directly onto a host virtual region.
/// </summary>
public sealed class MemoryManagerHostTracked : VirtualMemoryManagerRefCountedBase, IMemoryManager, IVirtualMemoryManagerTracked
{
private readonly InvalidAccessHandler _invalidAccessHandler;
private readonly bool _unsafeMode;
private readonly MemoryBlock _backingMemory;
public int AddressSpaceBits { get; }
public MemoryTracking Tracking { get; }
private readonly NativePageTable _nativePageTable;
private readonly AddressSpacePartitioned _addressSpace;
private readonly ManagedPageFlags _pages;
protected override ulong AddressSpaceSize { get; }
/// <inheritdoc/>
public bool UsesPrivateAllocations => true;
public IntPtr PageTablePointer => _nativePageTable.PageTablePointer;
public MemoryManagerType Type => _unsafeMode ? MemoryManagerType.HostTrackedUnsafe : MemoryManagerType.HostTracked;
public event Action<ulong, ulong> UnmapEvent;
/// <summary>
/// Creates a new instance of the host tracked memory manager.
/// </summary>
/// <param name="backingMemory">Physical backing memory where virtual memory will be mapped to</param>
/// <param name="addressSpaceSize">Size of the address space</param>
/// <param name="unsafeMode">True if unmanaged access should not be masked (unsafe), false otherwise.</param>
/// <param name="invalidAccessHandler">Optional function to handle invalid memory accesses</param>
public MemoryManagerHostTracked(MemoryBlock backingMemory, ulong addressSpaceSize, bool unsafeMode, InvalidAccessHandler invalidAccessHandler)
{
bool useProtectionMirrors = MemoryBlock.GetPageSize() > PageSize;
Tracking = new MemoryTracking(this, PageSize, invalidAccessHandler, useProtectionMirrors);
_backingMemory = backingMemory;
_invalidAccessHandler = invalidAccessHandler;
_unsafeMode = unsafeMode;
AddressSpaceSize = addressSpaceSize;
ulong asSize = PageSize;
int asBits = PageBits;
while (asSize < AddressSpaceSize)
{
asSize <<= 1;
asBits++;
}
AddressSpaceBits = asBits;
if (useProtectionMirrors && !NativeSignalHandler.SupportsFaultAddressPatching())
{
// Currently we require being able to change the fault address to something else
// in order to "emulate" 4KB granularity protection on systems with larger page size.
throw new PlatformNotSupportedException();
}
_pages = new ManagedPageFlags(asBits);
_nativePageTable = new(asSize);
_addressSpace = new(Tracking, backingMemory, _nativePageTable, useProtectionMirrors);
}
/// <inheritdoc/>
public void Map(ulong va, ulong pa, ulong size, MemoryMapFlags flags)
{
AssertValidAddressAndSize(va, size);
if (flags.HasFlag(MemoryMapFlags.Private))
{
_addressSpace.Map(va, pa, size);
}
_pages.AddMapping(va, size);
_nativePageTable.Map(va, pa, size, _addressSpace, _backingMemory, flags.HasFlag(MemoryMapFlags.Private));
Tracking.Map(va, size);
}
/// <inheritdoc/>
public void Unmap(ulong va, ulong size)
{
AssertValidAddressAndSize(va, size);
_addressSpace.Unmap(va, size);
UnmapEvent?.Invoke(va, size);
Tracking.Unmap(va, size);
_pages.RemoveMapping(va, size);
_nativePageTable.Unmap(va, size);
}
public override T ReadTracked<T>(ulong va)
{
try
{
return base.ReadTracked<T>(va);
}
catch (InvalidMemoryRegionException)
{
if (_invalidAccessHandler == null || !_invalidAccessHandler(va))
{
throw;
}
return default;
}
}
public override void Read(ulong va, Span<byte> data)
{
if (data.Length == 0)
{
return;
}
try
{
AssertValidAddressAndSize(va, (ulong)data.Length);
ulong endVa = va + (ulong)data.Length;
int offset = 0;
while (va < endVa)
{
(MemoryBlock memory, ulong rangeOffset, ulong copySize) = GetMemoryOffsetAndSize(va, (ulong)(data.Length - offset));
memory.GetSpan(rangeOffset, (int)copySize).CopyTo(data.Slice(offset, (int)copySize));
va += copySize;
offset += (int)copySize;
}
}
catch (InvalidMemoryRegionException)
{
if (_invalidAccessHandler == null || !_invalidAccessHandler(va))
{
throw;
}
}
}
public override bool WriteWithRedundancyCheck(ulong va, ReadOnlySpan<byte> data)
{
if (data.Length == 0)
{
return false;
}
SignalMemoryTracking(va, (ulong)data.Length, false);
if (TryGetVirtualContiguous(va, data.Length, out MemoryBlock memoryBlock, out ulong offset))
{
var target = memoryBlock.GetSpan(offset, data.Length);
bool changed = !data.SequenceEqual(target);
if (changed)
{
data.CopyTo(target);
}
return changed;
}
else
{
WriteImpl(va, data);
return true;
}
}
public override ReadOnlySpan<byte> GetSpan(ulong va, int size, bool tracked = false)
{
if (size == 0)
{
return ReadOnlySpan<byte>.Empty;
}
if (tracked)
{
SignalMemoryTracking(va, (ulong)size, false);
}
if (TryGetVirtualContiguous(va, size, out MemoryBlock memoryBlock, out ulong offset))
{
return memoryBlock.GetSpan(offset, size);
}
else
{
Span<byte> data = new byte[size];
Read(va, data);
return data;
}
}
public override WritableRegion GetWritableRegion(ulong va, int size, bool tracked = false)
{
if (size == 0)
{
return new WritableRegion(null, va, Memory<byte>.Empty);
}
if (tracked)
{
SignalMemoryTracking(va, (ulong)size, true);
}
if (TryGetVirtualContiguous(va, size, out MemoryBlock memoryBlock, out ulong offset))
{
return new WritableRegion(null, va, memoryBlock.GetMemory(offset, size));
}
else
{
Memory<byte> memory = new byte[size];
Read(va, memory.Span);
return new WritableRegion(this, va, memory);
}
}
public ref T GetRef<T>(ulong va) where T : unmanaged
{
if (!TryGetVirtualContiguous(va, Unsafe.SizeOf<T>(), out MemoryBlock memory, out ulong offset))
{
ThrowMemoryNotContiguous();
}
SignalMemoryTracking(va, (ulong)Unsafe.SizeOf<T>(), true);
return ref memory.GetRef<T>(offset);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public override bool IsMapped(ulong va)
{
return ValidateAddress(va) && _pages.IsMapped(va);
}
public bool IsRangeMapped(ulong va, ulong size)
{
AssertValidAddressAndSize(va, size);
return _pages.IsRangeMapped(va, size);
}
private bool TryGetVirtualContiguous(ulong va, int size, out MemoryBlock memory, out ulong offset)
{
if (_addressSpace.HasAnyPrivateAllocation(va, (ulong)size, out PrivateRange range))
{
// If we have a private allocation overlapping the range,
// then the access is only considered contiguous if it covers the entire range.
if (range.Memory != null)
{
memory = range.Memory;
offset = range.Offset;
return true;
}
memory = null;
offset = 0;
return false;
}
memory = _backingMemory;
offset = GetPhysicalAddressInternal(va);
return IsPhysicalContiguous(va, size);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private bool IsPhysicalContiguous(ulong va, int size)
{
if (!ValidateAddress(va) || !ValidateAddressAndSize(va, (ulong)size))
{
return false;
}
int pages = GetPagesCount(va, (uint)size, out va);
for (int page = 0; page < pages - 1; page++)
{
if (!ValidateAddress(va + PageSize))
{
return false;
}
if (GetPhysicalAddressInternal(va) + PageSize != GetPhysicalAddressInternal(va + PageSize))
{
return false;
}
va += PageSize;
}
return true;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private ulong GetContiguousSize(ulong va, ulong size)
{
ulong contiguousSize = PageSize - (va & PageMask);
if (!ValidateAddress(va) || !ValidateAddressAndSize(va, size))
{
return contiguousSize;
}
int pages = GetPagesCount(va, size, out va);
for (int page = 0; page < pages - 1; page++)
{
if (!ValidateAddress(va + PageSize))
{
return contiguousSize;
}
if (GetPhysicalAddressInternal(va) + PageSize != GetPhysicalAddressInternal(va + PageSize))
{
return contiguousSize;
}
va += PageSize;
contiguousSize += PageSize;
}
return Math.Min(contiguousSize, size);
}
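A worked example under the same kind of hypothetical mapping used earlier (pages 0x1000 and 0x2000 physically adjacent, 0x3000 not):

// GetContiguousSize(va: 0x1800, size: 0x3000):
// initial contiguousSize = 0x1000 - 0x800 = 0x800 (the remainder of the first page)
// second page is physically adjacent      -> contiguousSize = 0x1800
// third page breaks contiguity            -> returns 0x1800 from inside the loop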
private (MemoryBlock, ulong, ulong) GetMemoryOffsetAndSize(ulong va, ulong size)
{
PrivateRange privateRange = _addressSpace.GetFirstPrivateAllocation(va, size, out ulong nextVa);
if (privateRange.Memory != null)
{
return (privateRange.Memory, privateRange.Offset, privateRange.Size);
}
ulong physSize = GetContiguousSize(va, Math.Min(size, nextVa - va));
return (_backingMemory, GetPhysicalAddressChecked(va), physSize);
}
public IEnumerable<HostMemoryRange> GetHostRegions(ulong va, ulong size)
{
if (!ValidateAddressAndSize(va, size))
{
return null;
}
var regions = new List<HostMemoryRange>();
ulong endVa = va + size;
try
{
while (va < endVa)
{
(MemoryBlock memory, ulong rangeOffset, ulong rangeSize) = GetMemoryOffsetAndSize(va, endVa - va);
regions.Add(new((UIntPtr)memory.GetPointer(rangeOffset, rangeSize), rangeSize));
va += rangeSize;
}
}
catch (InvalidMemoryRegionException)
{
return null;
}
return regions;
}
public IEnumerable<MemoryRange> GetPhysicalRegions(ulong va, ulong size)
{
if (size == 0)
{
return Enumerable.Empty<MemoryRange>();
}
return GetPhysicalRegionsImpl(va, size);
}
private List<MemoryRange> GetPhysicalRegionsImpl(ulong va, ulong size)
{
if (!ValidateAddress(va) || !ValidateAddressAndSize(va, size))
{
return null;
}
int pages = GetPagesCount(va, (uint)size, out va);
var regions = new List<MemoryRange>();
ulong regionStart = GetPhysicalAddressInternal(va);
ulong regionSize = PageSize;
for (int page = 0; page < pages - 1; page++)
{
if (!ValidateAddress(va + PageSize))
{
return null;
}
ulong newPa = GetPhysicalAddressInternal(va + PageSize);
if (GetPhysicalAddressInternal(va) + PageSize != newPa)
{
regions.Add(new MemoryRange(regionStart, regionSize));
regionStart = newPa;
regionSize = 0;
}
va += PageSize;
regionSize += PageSize;
}
regions.Add(new MemoryRange(regionStart, regionSize));
return regions;
}
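With the same hypothetical mapping, the merge behaviour looks like this:

// va pages 0x1000, 0x2000 -> pa 0x7000, 0x8000 (adjacent); va 0x3000 -> pa 0xA000.
// GetPhysicalRegions(0x1000, 0x3000) yields two ranges:
//   MemoryRange(0x7000, 0x2000)  // the two physically adjacent pages, merged
//   MemoryRange(0xA000, 0x1000)  // the page after the physical gap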
/// <inheritdoc/>
/// <remarks>
/// This function also validates that the given range is both valid and mapped, and will throw if it is not.
/// </remarks>
public override void SignalMemoryTracking(ulong va, ulong size, bool write, bool precise = false, int? exemptId = null)
{
AssertValidAddressAndSize(va, size);
if (precise)
{
Tracking.VirtualMemoryEvent(va, size, write, precise: true, exemptId);
return;
}
// Software table, used for managed memory tracking.
_pages.SignalMemoryTracking(Tracking, va, size, write, exemptId);
}
public RegionHandle BeginTracking(ulong address, ulong size, int id, RegionFlags flags = RegionFlags.None)
{
return Tracking.BeginTracking(address, size, id, flags);
}
public MultiRegionHandle BeginGranularTracking(ulong address, ulong size, IEnumerable<IRegionHandle> handles, ulong granularity, int id, RegionFlags flags = RegionFlags.None)
{
return Tracking.BeginGranularTracking(address, size, handles, granularity, id, flags);
}
public SmartMultiRegionHandle BeginSmartGranularTracking(ulong address, ulong size, ulong granularity, int id)
{
return Tracking.BeginSmartGranularTracking(address, size, granularity, id);
}
private ulong GetPhysicalAddressChecked(ulong va)
{
if (!IsMapped(va))
{
ThrowInvalidMemoryRegionException($"Not mapped: va=0x{va:X16}");
}
return GetPhysicalAddressInternal(va);
}
private ulong GetPhysicalAddressInternal(ulong va)
{
return _nativePageTable.GetPhysicalAddress(va);
}
/// <inheritdoc/>
public void Reprotect(ulong va, ulong size, MemoryPermission protection)
{
// TODO
}
/// <inheritdoc/>
public void TrackingReprotect(ulong va, ulong size, MemoryPermission protection, bool guest)
{
if (guest)
{
_addressSpace.Reprotect(va, size, protection);
}
else
{
_pages.TrackingReprotect(va, size, protection);
}
}
/// <summary>
/// Disposes of resources used by the memory manager.
/// </summary>
protected override void Destroy()
{
_addressSpace.Dispose();
_nativePageTable.Dispose();
}
protected override Memory<byte> GetPhysicalAddressMemory(nuint pa, int size)
=> _backingMemory.GetMemory(pa, size);
protected override Span<byte> GetPhysicalAddressSpan(nuint pa, int size)
=> _backingMemory.GetSpan(pa, size);
protected override void WriteImpl(ulong va, ReadOnlySpan<byte> data)
{
try
{
AssertValidAddressAndSize(va, (ulong)data.Length);
ulong endVa = va + (ulong)data.Length;
int offset = 0;
while (va < endVa)
{
(MemoryBlock memory, ulong rangeOffset, ulong copySize) = GetMemoryOffsetAndSize(va, (ulong)(data.Length - offset));
data.Slice(offset, (int)copySize).CopyTo(memory.GetSpan(rangeOffset, (int)copySize));
va += copySize;
offset += (int)copySize;
}
}
catch (InvalidMemoryRegionException)
{
if (_invalidAccessHandler == null || !_invalidAccessHandler(va))
{
throw;
}
}
}
protected override nuint TranslateVirtualAddressChecked(ulong va)
=> (nuint)GetPhysicalAddressChecked(va);
protected override nuint TranslateVirtualAddressUnchecked(ulong va)
=> (nuint)GetPhysicalAddressInternal(va);
}
}


@@ -1126,11 +1126,23 @@ namespace Ryujinx.Cpu.LightningJit.Arm32.Target.Arm64
Operand destination64 = new(destination.Kind, OperandType.I64, destination.Value);
Operand basePointer = new(regAlloc.FixedPageTableRegister, RegisterType.Integer, OperandType.I64);
if (mmType == MemoryManagerType.HostMapped || mmType == MemoryManagerType.HostMappedUnsafe)
{
// We don't need to mask the address for the safe mode, since it is already naturally limited to 32-bit
// and can never reach out of the guest address space.
if (mmType.IsHostTracked())
{
int tempRegister = regAlloc.AllocateTempGprRegister();
Operand pte = new(tempRegister, RegisterType.Integer, OperandType.I64);
asm.Lsr(pte, guestAddress, new Operand(OperandKind.Constant, OperandType.I32, 12));
asm.LdrRr(pte, basePointer, pte, ArmExtensionType.Uxtx, true);
asm.Add(destination64, pte, guestAddress);
regAlloc.FreeTempGprRegister(tempRegister);
}
else if (mmType.IsHostMapped())
{
asm.Add(destination64, basePointer, guestAddress);
}
else


@@ -1131,5 +1131,37 @@ namespace Ryujinx.Cpu.LightningJit.Arm64
return false;
}
public static bool IsPartialRegisterUpdateMemory(this InstName name)
{
switch (name)
{
case InstName.Ld1AdvsimdSnglAsNoPostIndex:
case InstName.Ld1AdvsimdSnglAsPostIndex:
case InstName.Ld2AdvsimdSnglAsNoPostIndex:
case InstName.Ld2AdvsimdSnglAsPostIndex:
case InstName.Ld3AdvsimdSnglAsNoPostIndex:
case InstName.Ld3AdvsimdSnglAsPostIndex:
case InstName.Ld4AdvsimdSnglAsNoPostIndex:
case InstName.Ld4AdvsimdSnglAsPostIndex:
return true;
}
return false;
}
public static bool IsPrefetchMemory(this InstName name)
{
switch (name)
{
case InstName.PrfmImm:
case InstName.PrfmLit:
case InstName.PrfmReg:
case InstName.Prfum:
return true;
}
return false;
}
}
}


@@ -1,15 +1,12 @@
using ARMeilleure.Memory;
using Ryujinx.Cpu.LightningJit.CodeGen.Arm64;
using System;
using System.Diagnostics;
using System.Numerics;
namespace Ryujinx.Cpu.LightningJit.Arm64
{
class RegisterAllocator
{
public const int MaxTemps = 1;
public const int MaxTempsInclFixed = MaxTemps + 2;
private uint _gprMask;
private readonly uint _fpSimdMask;
private readonly uint _pStateMask;
@@ -25,7 +22,7 @@ namespace Ryujinx.Cpu.LightningJit.Arm64
public uint AllFpSimdMask => _fpSimdMask;
public uint AllPStateMask => _pStateMask;
public RegisterAllocator(uint gprMask, uint fpSimdMask, uint pStateMask, bool hasHostCall)
public RegisterAllocator(MemoryManagerType mmType, uint gprMask, uint fpSimdMask, uint pStateMask, bool hasHostCall)
{
_gprMask = gprMask;
_fpSimdMask = fpSimdMask;
@@ -56,7 +53,7 @@ namespace Ryujinx.Cpu.LightningJit.Arm64
BuildRegisterMap(_registerMap);
Span<int> tempRegisters = stackalloc int[MaxTemps];
Span<int> tempRegisters = stackalloc int[CalculateMaxTemps(mmType)];
for (int index = 0; index < tempRegisters.Length; index++)
{
@@ -150,5 +147,15 @@ namespace Ryujinx.Cpu.LightningJit.Arm64
{
mask &= ~(1u << index);
}
public static int CalculateMaxTemps(MemoryManagerType mmType)
{
return mmType.IsHostMapped() ? 1 : 2;
}
public static int CalculateMaxTempsInclFixed(MemoryManagerType mmType)
{
return CalculateMaxTemps(mmType) + 2;
}
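A usage sketch tying these helpers to the translation paths (mmType here is a hypothetical variable; the values mirror the two methods above):

int temps = CalculateMaxTemps(mmType);          // 1 for host-mapped, 2 otherwise
int total = CalculateMaxTempsInclFixed(mmType); // the same plus the two fixed registers
// The host-tracked path needs the extra scratch register to hold the page table
// entry loaded during inline address translation.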
}
}


@@ -247,7 +247,7 @@ namespace Ryujinx.Cpu.LightningJit.Arm64
}
}
if (!flags.HasFlag(InstFlags.ReadRt))
if (!flags.HasFlag(InstFlags.ReadRt) || name.IsPartialRegisterUpdateMemory())
{
if (flags.HasFlag(InstFlags.Rt))
{
@@ -281,7 +281,7 @@ namespace Ryujinx.Cpu.LightningJit.Arm64
gprMask |= MaskFromIndex(ExtractRd(flags, encoding));
}
if (!flags.HasFlag(InstFlags.ReadRt))
if (!flags.HasFlag(InstFlags.ReadRt) || name.IsPartialRegisterUpdateMemory())
{
if (flags.HasFlag(InstFlags.Rt))
{


@@ -316,7 +316,7 @@ namespace Ryujinx.Cpu.LightningJit.Arm64.Target.Arm64
uint pStateUseMask = multiBlock.GlobalUseMask.PStateMask;
CodeWriter writer = new();
RegisterAllocator regAlloc = new(gprUseMask, fpSimdUseMask, pStateUseMask, multiBlock.HasHostCall);
RegisterAllocator regAlloc = new(memoryManager.Type, gprUseMask, fpSimdUseMask, pStateUseMask, multiBlock.HasHostCall);
RegisterSaveRestore rsr = new(
regAlloc.AllGprMask & AbiConstants.GprCalleeSavedRegsMask,
regAlloc.AllFpSimdMask & AbiConstants.FpSimdCalleeSavedRegsMask,


@@ -274,7 +274,8 @@ namespace Ryujinx.Cpu.LightningJit.Arm64.Target.Arm64
uint tempGprUseMask = gprUseMask | instGprReadMask | instGprWriteMask;
if (CalculateAvailableTemps(tempGprUseMask) < CalculateRequiredGprTemps(tempGprUseMask) || totalInsts++ >= MaxInstructionsPerFunction)
if (CalculateAvailableTemps(tempGprUseMask) < CalculateRequiredGprTemps(memoryManager.Type, tempGprUseMask) ||
totalInsts++ >= MaxInstructionsPerFunction)
{
isTruncated = true;
address -= 4UL;
@@ -378,9 +379,9 @@ namespace Ryujinx.Cpu.LightningJit.Arm64.Target.Arm64
return false;
}
private static int CalculateRequiredGprTemps(uint gprUseMask)
private static int CalculateRequiredGprTemps(MemoryManagerType mmType, uint gprUseMask)
{
return BitOperations.PopCount(gprUseMask & RegisterUtils.ReservedRegsMask) + RegisterAllocator.MaxTempsInclFixed;
return BitOperations.PopCount(gprUseMask & RegisterUtils.ReservedRegsMask) + RegisterAllocator.CalculateMaxTempsInclFixed(mmType);
}
private static int CalculateAvailableTemps(uint gprUseMask)


@@ -55,6 +55,16 @@ namespace Ryujinx.Cpu.LightningJit.Arm64.Target.Arm64
ulong pc,
uint encoding)
{
if (name.IsPrefetchMemory() && mmType == MemoryManagerType.HostTrackedUnsafe)
{
// Prefetch to invalid addresses do not cause faults, so for memory manager
// types where we need to access the page table before doing the prefetch,
// we should make sure we won't try to access an out of bounds page table region.
// To do this, we force the masked memory manager variant to be used.
mmType = MemoryManagerType.HostTracked;
}
switch (addressForm)
{
case AddressForm.OffsetReg:
@@ -511,18 +521,48 @@ namespace Ryujinx.Cpu.LightningJit.Arm64.Target.Arm64
WriteAddressTranslation(asBits, mmType, regAlloc, ref asm, destination, guestAddress);
}
private static void WriteAddressTranslation(int asBits, MemoryManagerType mmType, RegisterAllocator regAlloc, ref Assembler asm, Operand destination, ulong guestAddress)
private static void WriteAddressTranslation(
int asBits,
MemoryManagerType mmType,
RegisterAllocator regAlloc,
ref Assembler asm,
Operand destination,
ulong guestAddress)
{
asm.Mov(destination, guestAddress);
WriteAddressTranslation(asBits, mmType, regAlloc, ref asm, destination, destination);
}
private static void WriteAddressTranslation(int asBits, MemoryManagerType mmType, RegisterAllocator regAlloc, ref Assembler asm, Operand destination, Operand guestAddress)
private static void WriteAddressTranslation(
int asBits,
MemoryManagerType mmType,
RegisterAllocator regAlloc,
ref Assembler asm,
Operand destination,
Operand guestAddress)
{
Operand basePointer = new(regAlloc.FixedPageTableRegister, RegisterType.Integer, OperandType.I64);
if (mmType == MemoryManagerType.HostMapped || mmType == MemoryManagerType.HostMappedUnsafe)
if (mmType.IsHostTracked())
{
int tempRegister = regAlloc.AllocateTempGprRegister();
Operand pte = new(tempRegister, RegisterType.Integer, OperandType.I64);
asm.Lsr(pte, guestAddress, new Operand(OperandKind.Constant, OperandType.I32, 12));
if (mmType == MemoryManagerType.HostTracked)
{
asm.And(pte, pte, new Operand(OperandKind.Constant, OperandType.I64, ulong.MaxValue >> (64 - (asBits - 12))));
}
asm.LdrRr(pte, basePointer, pte, ArmExtensionType.Uxtx, true);
asm.Add(destination, pte, guestAddress);
regAlloc.FreeTempGprRegister(tempRegister);
}
else if (mmType.IsHostMapped())
{
if (mmType == MemoryManagerType.HostMapped)
{
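The host-tracked branch above encodes an inline page-table walk; a software sketch of the same computation, under stated assumptions (PageBits = 12, and pageTable is a hypothetical ulong[] whose entries fold in the host/guest page delta):

ulong pageIndex = guestAddress >> 12;                     // Lsr: virtual page number
if (mmType == MemoryManagerType.HostTracked)
{
    pageIndex &= ulong.MaxValue >> (64 - (asBits - 12));  // And: clamp to the address space
}
ulong pte = pageTable[pageIndex];                         // LdrRr: indexed page table load
ulong hostAddress = pte + guestAddress;                   // Add: the entry absorbs the page offset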


@@ -68,9 +68,9 @@ namespace Ryujinx.Cpu.LightningJit
FunctionTable.Fill = (ulong)Stubs.SlowDispatchStub;
if (memory.Type.IsHostMapped())
if (memory.Type.IsHostMappedOrTracked())
{
NativeSignalHandler.InitializeSignalHandler(MemoryBlock.GetPageSize());
NativeSignalHandler.InitializeSignalHandler();
}
}


@@ -0,0 +1,389 @@
using Ryujinx.Memory;
using Ryujinx.Memory.Tracking;
using System;
using System.Runtime.CompilerServices;
using System.Threading;
namespace Ryujinx.Cpu
{
/// <summary>
/// A page bitmap that keeps track of mapped state and tracking protection
/// for managed memory accesses (not using host page protection).
/// </summary>
internal readonly struct ManagedPageFlags
{
public const int PageBits = 12;
public const int PageSize = 1 << PageBits;
public const int PageMask = PageSize - 1;
private readonly ulong[] _pageBitmap;
public const int PageToPteShift = 5; // 32 pages (2 bits each) in one ulong page table entry.
public const ulong BlockMappedMask = 0x5555555555555555; // First bit of each table entry set.
private enum ManagedPtBits : ulong
{
Unmapped = 0,
Mapped,
WriteTracked,
ReadWriteTracked,
MappedReplicated = 0x5555555555555555,
WriteTrackedReplicated = 0xaaaaaaaaaaaaaaaa,
ReadWriteTrackedReplicated = ulong.MaxValue,
}
public ManagedPageFlags(int addressSpaceBits)
{
int bits = Math.Max(0, addressSpaceBits - (PageBits + PageToPteShift));
_pageBitmap = new ulong[1 << bits];
}
/// <summary>
/// Computes the number of pages in a virtual address range.
/// </summary>
/// <param name="va">Virtual address of the range</param>
/// <param name="size">Size of the range</param>
/// <param name="startVa">The virtual address of the beginning of the first page</param>
/// <remarks>This function does not differentiate between allocated and unallocated pages.</remarks>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static int GetPagesCount(ulong va, ulong size, out ulong startVa)
{
// WARNING: Always check if ulong does not overflow during the operations.
startVa = va & ~(ulong)PageMask;
ulong vaSpan = (va - startVa + size + PageMask) & ~(ulong)PageMask;
return (int)(vaSpan / PageSize);
}
/// <summary>
/// Checks if the page at a given CPU virtual address is mapped.
/// </summary>
/// <param name="va">Virtual address to check</param>
/// <returns>True if the address is mapped, false otherwise</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public readonly bool IsMapped(ulong va)
{
ulong page = va >> PageBits;
int bit = (int)((page & 31) << 1);
int pageIndex = (int)(page >> PageToPteShift);
ref ulong pageRef = ref _pageBitmap[pageIndex];
ulong pte = Volatile.Read(ref pageRef);
return ((pte >> bit) & 3) != 0;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static void GetPageBlockRange(ulong pageStart, ulong pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex)
{
startMask = ulong.MaxValue << ((int)(pageStart & 31) << 1);
endMask = ulong.MaxValue >> (64 - ((int)(pageEnd & 31) << 1));
pageIndex = (int)(pageStart >> PageToPteShift);
pageEndIndex = (int)((pageEnd - 1) >> PageToPteShift);
}
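A worked example of the masks (hypothetical values, all within a single 64-bit word):

// pageStart = 3, pageEnd = 10:
// startMask = ulong.MaxValue << 6          // drops entries 0..2 (2 bits per page)
// endMask   = ulong.MaxValue >> (64 - 20)  // keeps entries 0..9
// startMask & endMask selects exactly the 2-bit entries for pages 3..9,
// i.e. the half-open range [pageStart, pageEnd).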
/// <summary>
/// Checks if a memory range is mapped.
/// </summary>
/// <param name="va">Virtual address of the range</param>
/// <param name="size">Size of the range in bytes</param>
/// <returns>True if the entire range is mapped, false otherwise</returns>
public readonly bool IsRangeMapped(ulong va, ulong size)
{
int pages = GetPagesCount(va, size, out _);
if (pages == 1)
{
return IsMapped(va);
}
ulong pageStart = va >> PageBits;
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
// Check if either bit in each 2 bit page entry is set.
// OR the block with itself shifted down by 1, and check the first bit of each entry.
ulong mask = BlockMappedMask & startMask;
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask &= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte = Volatile.Read(ref pageRef);
pte |= pte >> 1;
if ((pte & mask) != mask)
{
return false;
}
mask = BlockMappedMask;
}
return true;
}
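The OR-fold trick in the loop can be seen on a small hypothetical word:

ulong pte = 0b_10_01_00_11;        // four 2-bit entries; the second-lowest is Unmapped
ulong folded = pte | (pte >> 1);   // fold each entry's two bits into its low bit
bool allMapped = (folded & 0b_01_01_01_01) == 0b_01_01_01_01; // false: one entry is 0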
/// <summary>
/// Reprotect a region of virtual memory for tracking.
/// </summary>
/// <param name="va">Virtual address base</param>
/// <param name="size">Size of the region to protect</param>
/// <param name="protection">Memory protection to set</param>
public readonly void TrackingReprotect(ulong va, ulong size, MemoryPermission protection)
{
// Protection is inverted on software pages, since the default value is 0.
protection = (~protection) & MemoryPermission.ReadAndWrite;
int pages = GetPagesCount(va, size, out va);
ulong pageStart = va >> PageBits;
if (pages == 1)
{
ulong protTag = protection switch
{
MemoryPermission.None => (ulong)ManagedPtBits.Mapped,
MemoryPermission.Write => (ulong)ManagedPtBits.WriteTracked,
_ => (ulong)ManagedPtBits.ReadWriteTracked,
};
int bit = (int)((pageStart & 31) << 1);
ulong tagMask = 3UL << bit;
ulong invTagMask = ~tagMask;
ulong tag = protTag << bit;
int pageIndex = (int)(pageStart >> PageToPteShift);
ref ulong pageRef = ref _pageBitmap[pageIndex];
ulong pte;
do
{
pte = Volatile.Read(ref pageRef);
}
while ((pte & tagMask) != 0 && Interlocked.CompareExchange(ref pageRef, (pte & invTagMask) | tag, pte) != pte);
}
else
{
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
ulong mask = startMask;
ulong protTag = protection switch
{
MemoryPermission.None => (ulong)ManagedPtBits.MappedReplicated,
MemoryPermission.Write => (ulong)ManagedPtBits.WriteTrackedReplicated,
_ => (ulong)ManagedPtBits.ReadWriteTrackedReplicated,
};
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask &= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte;
ulong mappedMask;
// Change the protection of all 2 bit entries that are mapped.
do
{
pte = Volatile.Read(ref pageRef);
mappedMask = pte | (pte >> 1);
mappedMask |= (mappedMask & BlockMappedMask) << 1;
mappedMask &= mask; // Only update mapped pages within the given range.
}
while (Interlocked.CompareExchange(ref pageRef, (pte & (~mappedMask)) | (protTag & mappedMask), pte) != pte);
mask = ulong.MaxValue;
}
}
}
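The Replicated enum values used here apply one state to a whole word at a time; for example:

// WriteTracked           = 0b10 for a single 2-bit entry.
// WriteTrackedReplicated = 0xAAAAAAAAAAAAAAAA, i.e. 0b10 repeated across all 32 entries,
// which lets the CAS loop above retag up to 32 pages per iteration.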
/// <summary>
/// Alerts the memory tracking that a given region has been read from or written to.
/// This should be called before read/write is performed.
/// </summary>
/// <param name="tracking">Memory tracking structure to call when pages are protected</param>
/// <param name="va">Virtual address of the region</param>
/// <param name="size">Size of the region</param>
/// <param name="write">True if the region was written, false if read</param>
/// <param name="exemptId">Optional ID of the handles that should not be signalled</param>
/// <remarks>
/// This function also validates that the given range is both valid and mapped, and will throw if it is not.
/// </remarks>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public readonly void SignalMemoryTracking(MemoryTracking tracking, ulong va, ulong size, bool write, int? exemptId = null)
{
// Software table, used for managed memory tracking.
int pages = GetPagesCount(va, size, out _);
ulong pageStart = va >> PageBits;
if (pages == 1)
{
ulong tag = (ulong)(write ? ManagedPtBits.WriteTracked : ManagedPtBits.ReadWriteTracked);
int bit = (int)((pageStart & 31) << 1);
int pageIndex = (int)(pageStart >> PageToPteShift);
ref ulong pageRef = ref _pageBitmap[pageIndex];
ulong pte = Volatile.Read(ref pageRef);
ulong state = ((pte >> bit) & 3);
if (state >= tag)
{
tracking.VirtualMemoryEvent(va, size, write, precise: false, exemptId);
return;
}
else if (state == 0)
{
ThrowInvalidMemoryRegionException($"Not mapped: va=0x{va:X16}, size=0x{size:X16}");
}
}
else
{
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
ulong mask = startMask;
ulong anyTrackingTag = (ulong)ManagedPtBits.WriteTrackedReplicated;
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask &= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte = Volatile.Read(ref pageRef);
ulong mappedMask = mask & BlockMappedMask;
ulong mappedPte = pte | (pte >> 1);
if ((mappedPte & mappedMask) != mappedMask)
{
ThrowInvalidMemoryRegionException($"Not mapped: va=0x{va:X16}, size=0x{size:X16}");
}
pte &= mask;
if ((pte & anyTrackingTag) != 0) // Search for any tracking.
{
// Writes trigger any tracking.
// Only trigger tracking from reads if both bits are set on any page.
if (write || (pte & (pte >> 1) & BlockMappedMask) != 0)
{
tracking.VirtualMemoryEvent(va, size, write, precise: false, exemptId);
break;
}
}
mask = ulong.MaxValue;
}
}
}
/// <summary>
/// Adds the given address mapping to the page table.
/// </summary>
/// <param name="va">Virtual memory address</param>
/// <param name="size">Size to be mapped</param>
public readonly void AddMapping(ulong va, ulong size)
{
int pages = GetPagesCount(va, size, out _);
ulong pageStart = va >> PageBits;
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
ulong mask = startMask;
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask &= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte;
ulong mappedMask;
// Map all 2-bit entries that are unmapped.
do
{
pte = Volatile.Read(ref pageRef);
mappedMask = pte | (pte >> 1);
mappedMask |= (mappedMask & BlockMappedMask) << 1;
mappedMask |= ~mask; // Treat everything outside the range as mapped, thus unchanged.
}
while (Interlocked.CompareExchange(ref pageRef, (pte & mappedMask) | (BlockMappedMask & (~mappedMask)), pte) != pte);
mask = ulong.MaxValue;
}
}
/// <summary>
/// Removes the given address mapping from the page table.
/// </summary>
/// <param name="va">Virtual memory address</param>
/// <param name="size">Size to be unmapped</param>
public readonly void RemoveMapping(ulong va, ulong size)
{
int pages = GetPagesCount(va, size, out _);
ulong pageStart = va >> PageBits;
ulong pageEnd = pageStart + (ulong)pages;
GetPageBlockRange(pageStart, pageEnd, out ulong startMask, out ulong endMask, out int pageIndex, out int pageEndIndex);
startMask = ~startMask;
endMask = ~endMask;
ulong mask = startMask;
while (pageIndex <= pageEndIndex)
{
if (pageIndex == pageEndIndex)
{
mask |= endMask;
}
ref ulong pageRef = ref _pageBitmap[pageIndex++];
ulong pte;
do
{
pte = Volatile.Read(ref pageRef);
}
while (Interlocked.CompareExchange(ref pageRef, pte & mask, pte) != pte);
mask = 0;
}
}
private static void ThrowInvalidMemoryRegionException(string message) => throw new InvalidMemoryRegionException(message);
}
}


@@ -1,3 +1,4 @@
using Ryujinx.Common;
using Ryujinx.Cpu.Signal;
using Ryujinx.Memory;
using Ryujinx.Memory.Tracking;
@@ -8,19 +9,27 @@ namespace Ryujinx.Cpu
{
public class MemoryEhMeilleure : IDisposable
{
private delegate bool TrackingEventDelegate(ulong address, ulong size, bool write);
public delegate ulong TrackingEventDelegate(ulong address, ulong size, bool write);
private readonly MemoryTracking _tracking;
private readonly TrackingEventDelegate _trackingEvent;
private readonly ulong _pageSize;
private readonly ulong _baseAddress;
private readonly ulong _mirrorAddress;
public MemoryEhMeilleure(MemoryBlock addressSpace, MemoryBlock addressSpaceMirror, MemoryTracking tracking)
public MemoryEhMeilleure(MemoryBlock addressSpace, MemoryBlock addressSpaceMirror, MemoryTracking tracking, TrackingEventDelegate trackingEvent = null)
{
_baseAddress = (ulong)addressSpace.Pointer;
ulong endAddress = _baseAddress + addressSpace.Size;
_trackingEvent = tracking.VirtualMemoryEvent;
_tracking = tracking;
_trackingEvent = trackingEvent ?? VirtualMemoryEvent;
_pageSize = MemoryBlock.GetPageSize();
bool added = NativeSignalHandler.AddTrackedRegion((nuint)_baseAddress, (nuint)endAddress, Marshal.GetFunctionPointerForDelegate(_trackingEvent));
if (!added)
@@ -28,7 +37,7 @@ namespace Ryujinx.Cpu
throw new InvalidOperationException("Number of allowed tracked regions exceeded.");
}
if (OperatingSystem.IsWindows())
if (OperatingSystem.IsWindows() && addressSpaceMirror != null)
{
// Add a tracking event with no signal handler for the mirror on Windows.
// The native handler has its own code to check for the partial overlap race when regions are protected by accident,
@@ -46,6 +55,21 @@ namespace Ryujinx.Cpu
}
}
private ulong VirtualMemoryEvent(ulong address, ulong size, bool write)
{
ulong pageSize = _pageSize;
ulong addressAligned = BitUtils.AlignDown(address, pageSize);
ulong endAddressAligned = BitUtils.AlignUp(address + size, pageSize);
ulong sizeAligned = endAddressAligned - addressAligned;
if (_tracking.VirtualMemoryEvent(addressAligned, sizeAligned, write))
{
return _baseAddress + address;
}
return 0;
}
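A worked example of the realignment (hypothetical fault, pageSize = 0x1000):

// address = 0x1234, size = 0x10:
// addressAligned    = AlignDown(0x1234, 0x1000) = 0x1000
// endAddressAligned = AlignUp(0x1244, 0x1000)   = 0x2000
// sizeAligned       = 0x1000, so the tracking event covers the whole faulting page.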
public void Dispose()
{
GC.SuppressFinalize(this);


@@ -143,7 +143,7 @@ namespace Ryujinx.Cpu
}
}
public PrivateMemoryAllocator(int blockAlignment, MemoryAllocationFlags allocationFlags) : base(blockAlignment, allocationFlags)
public PrivateMemoryAllocator(ulong blockAlignment, MemoryAllocationFlags allocationFlags) : base(blockAlignment, allocationFlags)
{
}
@@ -180,10 +180,10 @@ namespace Ryujinx.Cpu
private readonly List<T> _blocks;
private readonly int _blockAlignment;
private readonly ulong _blockAlignment;
private readonly MemoryAllocationFlags _allocationFlags;
public PrivateMemoryAllocatorImpl(int blockAlignment, MemoryAllocationFlags allocationFlags)
public PrivateMemoryAllocatorImpl(ulong blockAlignment, MemoryAllocationFlags allocationFlags)
{
_blocks = new List<T>();
_blockAlignment = blockAlignment;
@@ -212,7 +212,7 @@ namespace Ryujinx.Cpu
}
}
ulong blockAlignedSize = BitUtils.AlignUp(size, (ulong)_blockAlignment);
ulong blockAlignedSize = BitUtils.AlignUp(size, _blockAlignment);
var memory = new MemoryBlock(blockAlignedSize, _allocationFlags);
var newBlock = createBlock(memory, blockAlignedSize);


@@ -70,7 +70,7 @@ namespace Ryujinx.Cpu.Signal
config = new SignalHandlerConfig();
}
public static void InitializeSignalHandler(ulong pageSize, Func<IntPtr, IntPtr, IntPtr> customSignalHandlerFactory = null)
public static void InitializeSignalHandler(Func<IntPtr, IntPtr, IntPtr> customSignalHandlerFactory = null)
{
if (_initialized)
{
@@ -90,7 +90,7 @@ namespace Ryujinx.Cpu.Signal
if (OperatingSystem.IsLinux() || OperatingSystem.IsMacOS())
{
_signalHandlerPtr = MapCode(NativeSignalHandlerGenerator.GenerateUnixSignalHandler(_handlerConfig, rangeStructSize, pageSize));
_signalHandlerPtr = MapCode(NativeSignalHandlerGenerator.GenerateUnixSignalHandler(_handlerConfig, rangeStructSize));
if (customSignalHandlerFactory != null)
{
@@ -107,7 +107,7 @@ namespace Ryujinx.Cpu.Signal
config.StructAddressOffset = 40; // ExceptionInformation1
config.StructWriteOffset = 32; // ExceptionInformation0
_signalHandlerPtr = MapCode(NativeSignalHandlerGenerator.GenerateWindowsSignalHandler(_handlerConfig, rangeStructSize, pageSize));
_signalHandlerPtr = MapCode(NativeSignalHandlerGenerator.GenerateWindowsSignalHandler(_handlerConfig, rangeStructSize));
if (customSignalHandlerFactory != null)
{
@@ -175,5 +175,10 @@ namespace Ryujinx.Cpu.Signal
return false;
}
public static bool SupportsFaultAddressPatching()
{
return NativeSignalHandlerGenerator.SupportsFaultAddressPatchingForHost();
}
}
}

View File

@@ -1,13 +1,10 @@
using Ryujinx.Memory;
using System.Diagnostics;
using System.Numerics;
using System.Threading;
namespace Ryujinx.Cpu
{
public abstract class VirtualMemoryManagerRefCountedBase<TVirtual, TPhysical> : VirtualMemoryManagerBase<TVirtual, TPhysical>, IRefCounted
where TVirtual : IBinaryInteger<TVirtual>
where TPhysical : IBinaryInteger<TPhysical>
public abstract class VirtualMemoryManagerRefCountedBase : VirtualMemoryManagerBase, IRefCounted
{
private int _referenceCount;

View File

@@ -343,11 +343,22 @@ namespace Ryujinx.Graphics.Gpu.Engine.Threed
bool unalignedChanged = _currentSpecState.SetHasUnalignedStorageBuffer(_channel.BufferManager.HasUnalignedStorageBuffers);
if (!_channel.TextureManager.CommitGraphicsBindings(_shaderSpecState) || unalignedChanged)
bool scaleMismatch;
do
{
// Shader must be reloaded. _vtgWritesRtLayer should not change.
UpdateShaderState();
if (!_channel.TextureManager.CommitGraphicsBindings(_shaderSpecState, out scaleMismatch) || unalignedChanged)
{
// Shader must be reloaded. _vtgWritesRtLayer should not change.
UpdateShaderState();
}
if (scaleMismatch)
{
// Binding textures changed scale of the bound render targets, correct the render target scale and rebind.
UpdateRenderTargetState();
}
}
while (scaleMismatch);
_channel.BufferManager.CommitGraphicsBindings(_drawState.DrawIndexed);
}

View File

@@ -46,7 +46,7 @@ namespace Ryujinx.Graphics.Gpu.Image
{
private const int MinCountForDeletion = 32;
private const int MaxCapacity = 2048;
private const ulong MaxTextureSizeCapacity = 512 * 1024 * 1024; // MB;
private const ulong MaxTextureSizeCapacity = 1024 * 1024 * 1024; // Bytes (1 GiB).
private readonly LinkedList<Texture> _textures;
private ulong _totalSize;

View File

@@ -69,7 +69,7 @@ namespace Ryujinx.Graphics.Gpu.Image
Address = address;
Size = size;
_memoryTracking = physicalMemory.BeginGranularTracking(address, size, ResourceKind.Pool);
_memoryTracking = physicalMemory.BeginGranularTracking(address, size, ResourceKind.Pool, RegionFlags.None);
_memoryTracking.RegisterPreciseAction(address, size, PreciseAction);
_modifiedDelegate = RegionModified;
}

View File

@@ -573,7 +573,7 @@ namespace Ryujinx.Graphics.Gpu.Image
/// <summary>
/// Discards all data for this texture.
/// This clears all dirty flags, modified flags, and pending copies from other textures.
/// This clears all dirty flags and pending copies from other textures.
/// It should be used if the texture data will be fully overwritten by the next use.
/// </summary>
public void DiscardData()

View File

@@ -282,7 +282,7 @@ namespace Ryujinx.Graphics.Gpu.Image
/// <summary>
/// Discards all data for a given texture.
/// This clears all dirty flags, modified flags, and pending copies from other textures.
/// This clears all dirty flags and pending copies from other textures.
/// </summary>
/// <param name="texture">The texture being discarded</param>
public void DiscardData(Texture texture)
@@ -1622,14 +1622,6 @@ namespace Ryujinx.Graphics.Gpu.Image
/// <param name="size">The size of the flushing memory access</param>
public void FlushAction(TextureGroupHandle handle, ulong address, ulong size)
{
// If the page size is larger than 4KB, we will have a lot of false positives for flushing.
// Let's avoid flushing textures that are unlikely to be read from CPU to improve performance
// on those platforms.
if (!_physicalMemory.Supports4KBPages && !Storage.Info.IsLinear && !_context.IsGpuThread())
{
return;
}
// There is a small gap here where the action is removed but _actionRegistered is still 1.
// In this case it will skip registering the action, but here we are already handling it,
// so there shouldn't be any issue as it's the same handler for all actions.

View File

@@ -182,11 +182,10 @@ namespace Ryujinx.Graphics.Gpu.Image
/// <summary>
/// Discards all data for this handle.
/// This clears all dirty flags, modified flags, and pending copies from other handles.
/// This clears all dirty flags and pending copies from other handles.
/// </summary>
public void DiscardData()
{
Modified = false;
DeferredCopy = null;
foreach (RegionHandle handle in Handles)

View File

@@ -360,15 +360,16 @@ namespace Ryujinx.Graphics.Gpu.Image
/// Commits bindings on the graphics pipeline.
/// </summary>
/// <param name="specState">Specialization state for the bound shader</param>
/// <param name="scaleMismatch">True if there is a scale mismatch in the render targets, indicating they must be re-evaluated</param>
/// <returns>True if all bound textures match the current shader specialization state, false otherwise</returns>
public bool CommitGraphicsBindings(ShaderSpecializationState specState)
public bool CommitGraphicsBindings(ShaderSpecializationState specState, out bool scaleMismatch)
{
_texturePoolCache.Tick();
_samplerPoolCache.Tick();
bool result = _gpBindingsManager.CommitBindings(specState);
UpdateRenderTargets();
scaleMismatch = UpdateRenderTargets();
return result;
}
@@ -426,9 +427,12 @@ namespace Ryujinx.Graphics.Gpu.Image
/// <summary>
/// Update host framebuffer attachments based on currently bound render target buffers.
/// </summary>
public void UpdateRenderTargets()
/// <returns>True if there is a scale mismatch in the render targets, indicating they must be re-evaluated</returns>
public bool UpdateRenderTargets()
{
bool anyChanged = false;
float expectedScale = RenderTargetScale;
bool scaleMismatch = false;
Texture dsTexture = _rtDepthStencil;
ITexture hostDsTexture = null;
@@ -448,6 +452,11 @@ namespace Ryujinx.Graphics.Gpu.Image
{
_rtHostDs = hostDsTexture;
anyChanged = true;
if (dsTexture != null && dsTexture.ScaleFactor != expectedScale)
{
scaleMismatch = true;
}
}
for (int index = 0; index < _rtColors.Length; index++)
@@ -470,6 +479,11 @@ namespace Ryujinx.Graphics.Gpu.Image
{
_rtHostColors[index] = hostTexture;
anyChanged = true;
if (texture != null && texture.ScaleFactor != expectedScale)
{
scaleMismatch = true;
}
}
}
@@ -477,6 +491,8 @@ namespace Ryujinx.Graphics.Gpu.Image
{
_context.Renderer.Pipeline.SetRenderTargets(_rtHostColors, _rtHostDs);
}
return scaleMismatch;
}
/// <summary>

View File

@@ -128,13 +128,13 @@ namespace Ryujinx.Graphics.Gpu.Memory
if (_useGranular)
{
_memoryTrackingGranular = physicalMemory.BeginGranularTracking(address, size, ResourceKind.Buffer, baseHandles);
_memoryTrackingGranular = physicalMemory.BeginGranularTracking(address, size, ResourceKind.Buffer, RegionFlags.UnalignedAccess, baseHandles);
_memoryTrackingGranular.RegisterPreciseAction(address, size, PreciseAction);
}
else
{
_memoryTracking = physicalMemory.BeginTracking(address, size, ResourceKind.Buffer);
_memoryTracking = physicalMemory.BeginTracking(address, size, ResourceKind.Buffer, RegionFlags.UnalignedAccess);
if (baseHandles != null)
{

View File

@@ -23,11 +23,6 @@ namespace Ryujinx.Graphics.Gpu.Memory
private readonly IVirtualMemoryManagerTracked _cpuMemory;
private int _referenceCount;
/// <summary>
/// Indicates whenever the memory manager supports 4KB pages.
/// </summary>
public bool Supports4KBPages => _cpuMemory.Supports4KBPages;
/// <summary>
/// In-memory shader cache.
/// </summary>
@@ -368,10 +363,11 @@ namespace Ryujinx.Graphics.Gpu.Memory
/// <param name="address">CPU virtual address of the region</param>
/// <param name="size">Size of the region</param>
/// <param name="kind">Kind of the resource being tracked</param>
/// <param name="flags">Region flags</param>
/// <returns>The memory tracking handle</returns>
public RegionHandle BeginTracking(ulong address, ulong size, ResourceKind kind)
public RegionHandle BeginTracking(ulong address, ulong size, ResourceKind kind, RegionFlags flags = RegionFlags.None)
{
return _cpuMemory.BeginTracking(address, size, (int)kind);
return _cpuMemory.BeginTracking(address, size, (int)kind, flags);
}
/// <summary>
@@ -408,12 +404,19 @@ namespace Ryujinx.Graphics.Gpu.Memory
/// <param name="address">CPU virtual address of the region</param>
/// <param name="size">Size of the region</param>
/// <param name="kind">Kind of the resource being tracked</param>
/// <param name="flags">Region flags</param>
/// <param name="handles">Handles to inherit state from or reuse</param>
/// <param name="granularity">Desired granularity of write tracking</param>
/// <returns>The memory tracking handle</returns>
public MultiRegionHandle BeginGranularTracking(ulong address, ulong size, ResourceKind kind, IEnumerable<IRegionHandle> handles = null, ulong granularity = 4096)
public MultiRegionHandle BeginGranularTracking(
ulong address,
ulong size,
ResourceKind kind,
RegionFlags flags = RegionFlags.None,
IEnumerable<IRegionHandle> handles = null,
ulong granularity = 4096)
{
return _cpuMemory.BeginGranularTracking(address, size, handles, granularity, (int)kind);
return _cpuMemory.BeginGranularTracking(address, size, handles, granularity, (int)kind, flags);
}
/// <summary>

View File

@@ -173,7 +173,7 @@ namespace Ryujinx.Graphics.Gpu.Memory
ShrinkOverlapsBufferIfNeeded();
// If the the range is not properly aligned for sparse mapping,
// If the range is not properly aligned for sparse mapping,
// let's just force it to a single range.
// This might cause issues in some applications that use sparse
// mappings.

View File

@@ -22,7 +22,7 @@ namespace Ryujinx.Graphics.Gpu.Shader.DiskCache
private const ushort FileFormatVersionMajor = 1;
private const ushort FileFormatVersionMinor = 2;
private const uint FileFormatVersionPacked = ((uint)FileFormatVersionMajor << 16) | FileFormatVersionMinor;
private const uint CodeGenVersion = 6455;
private const uint CodeGenVersion = 6462;
private const string SharedTocFileName = "shared.toc";
private const string SharedDataFileName = "shared.data";

View File

@@ -356,6 +356,16 @@ namespace Ryujinx.Graphics.Shader.CodeGen.Spirv
context.AddGlobalVariable(perVertexInputVariable);
context.Inputs.Add(new IoDefinition(StorageKind.Input, IoVariable.Position), perVertexInputVariable);
if (context.Definitions.Stage == ShaderStage.Geometry &&
context.Definitions.GpPassthrough &&
context.HostCapabilities.SupportsGeometryShaderPassthrough)
{
context.MemberDecorate(perVertexInputStructType, 0, Decoration.PassthroughNV);
context.MemberDecorate(perVertexInputStructType, 1, Decoration.PassthroughNV);
context.MemberDecorate(perVertexInputStructType, 2, Decoration.PassthroughNV);
context.MemberDecorate(perVertexInputStructType, 3, Decoration.PassthroughNV);
}
}
var perVertexOutputStructType = CreatePerVertexStructType(context);

View File

@@ -322,7 +322,7 @@ namespace Ryujinx.Graphics.Shader.Translation.Optimizations
Operand lhs = operation.GetSource(0);
Operand rhs = operation.GetSource(1);
// Check LHS of the the main multiplication operation. We expect an input being multiplied by gl_FragCoord.w.
// Check LHS of the main multiplication operation. We expect an input being multiplied by gl_FragCoord.w.
if (lhs.AsgOp is not Operation attrMulOp || attrMulOp.Inst != (Instruction.FP32 | Instruction.Multiply))
{
return;

View File

@@ -981,6 +981,7 @@ namespace Ryujinx.Graphics.Vulkan
_bindingBarriersDirty = true;
_newState.PipelineLayout = internalProgram.PipelineLayout;
_newState.HasTessellationControlShader = internalProgram.HasTessellationControlShader;
_newState.StagesCount = (uint)stages.Length;
stages.CopyTo(_newState.Stages.AsSpan()[..stages.Length]);

View File

@@ -311,6 +311,7 @@ namespace Ryujinx.Graphics.Vulkan
set => Internal.Id9 = (Internal.Id9 & 0xFFFFFFFFFFFFFFBF) | ((value ? 1UL : 0UL) << 6);
}
public bool HasTessellationControlShader;
public NativeArray<PipelineShaderStageCreateInfo> Stages;
public PipelineLayout PipelineLayout;
public SpecData SpecializationData;
@@ -319,6 +320,7 @@ namespace Ryujinx.Graphics.Vulkan
public void Initialize()
{
HasTessellationControlShader = false;
Stages = new NativeArray<PipelineShaderStageCreateInfo>(Constants.MaxShaderStages);
AdvancedBlendSrcPreMultiplied = true;
@@ -419,6 +421,15 @@ namespace Ryujinx.Graphics.Vulkan
PVertexBindingDescriptions = pVertexBindingDescriptions,
};
// Using patches topology without a tessellation shader is invalid.
// If we find such a case, return null pipeline to skip the draw.
if (Topology == PrimitiveTopology.PatchList && !HasTessellationControlShader)
{
program.AddGraphicsPipeline(ref Internal, null);
return null;
}
bool primitiveRestartEnable = PrimitiveRestartEnable;
bool topologySupportsRestart;

View File

@@ -10,8 +10,8 @@ namespace Ryujinx.Graphics.Vulkan.Queries
class BufferedQuery : IDisposable
{
private const int MaxQueryRetries = 5000;
private const long DefaultValue = -1;
private const long DefaultValueInt = 0xFFFFFFFF;
private const long DefaultValue = unchecked((long)0xFFFFFFFEFFFFFFFE);
private const long DefaultValueInt = 0xFFFFFFFE;
private const ulong HighMask = 0xFFFFFFFF00000000;
private readonly Vk _api;

View File

@@ -122,7 +122,6 @@ namespace Ryujinx.Graphics.Vulkan
gd.Api.CreateRenderPass(device, renderPassCreateInfo, null, out var renderPass).ThrowOnError();
_renderPass?.Dispose();
_renderPass = new Auto<DisposableRenderPass>(new DisposableRenderPass(gd.Api, device, renderPass));
}
@@ -162,7 +161,7 @@ namespace Ryujinx.Graphics.Vulkan
public void Dispose()
{
// Dispose all framebuffers
// Dispose all framebuffers.
foreach (var fb in _framebuffers.Values)
{
@@ -175,6 +174,10 @@ namespace Ryujinx.Graphics.Vulkan
{
texture.RemoveRenderPass(_key);
}
// Dispose render pass.
_renderPass.Dispose();
}
}
}

View File

@@ -21,6 +21,7 @@ namespace Ryujinx.Graphics.Vulkan
public bool HasMinimalLayout { get; }
public bool UsePushDescriptors { get; }
public bool IsCompute { get; }
public bool HasTessellationControlShader => (Stages & (1u << 3)) != 0;
public uint Stages { get; }
@@ -111,8 +112,8 @@ namespace Ryujinx.Graphics.Vulkan
bool usePushDescriptors = !isMinimal &&
VulkanConfiguration.UsePushDescriptors &&
_gd.Capabilities.SupportsPushDescriptors &&
!_gd.IsNvidiaPreTuring &&
!IsCompute &&
!HasPushDescriptorsBug(gd) &&
CanUsePushDescriptors(gd, resourceLayout, IsCompute);
ReadOnlyCollection<ResourceDescriptorCollection> sets = usePushDescriptors ?
@@ -147,6 +148,12 @@ namespace Ryujinx.Graphics.Vulkan
_firstBackgroundUse = !fromCache;
}
private static bool HasPushDescriptorsBug(VulkanRenderer gd)
{
// Those GPUs/drivers do not work properly with push descriptors, so we must force disable them.
return gd.IsNvidiaPreTuring || (gd.IsIntelArc && gd.IsIntelWindows);
}
private static bool CanUsePushDescriptors(VulkanRenderer gd, ResourceLayout layout, bool isCompute)
{
// If binding 3 is immediately used, use an alternate set of reserved bindings.
@@ -455,6 +462,7 @@ namespace Ryujinx.Graphics.Vulkan
stages[i] = _shaders[i].GetInfo();
}
pipeline.HasTessellationControlShader = HasTessellationControlShader;
pipeline.StagesCount = (uint)_shaders.Length;
pipeline.PipelineLayout = PipelineLayout;

View File

@@ -4,6 +4,7 @@ using Silk.NET.Vulkan;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using Format = Ryujinx.Graphics.GAL.Format;
using VkBuffer = Silk.NET.Vulkan.Buffer;
using VkFormat = Silk.NET.Vulkan.Format;
@@ -36,7 +37,8 @@ namespace Ryujinx.Graphics.Vulkan
public int FirstLayer { get; }
public int FirstLevel { get; }
public VkFormat VkFormat { get; }
public bool Valid { get; private set; }
private int _isValid;
public bool Valid => Volatile.Read(ref _isValid) != 0;
public TextureView(
VulkanRenderer gd,
@@ -158,7 +160,7 @@ namespace Ryujinx.Graphics.Vulkan
}
}
Valid = true;
_isValid = 1;
}
/// <summary>
@@ -178,7 +180,7 @@ namespace Ryujinx.Graphics.Vulkan
VkFormat = format;
Valid = true;
_isValid = 1;
}
public Auto<DisposableImage> GetImage()
@@ -1017,10 +1019,11 @@ namespace Ryujinx.Graphics.Vulkan
{
if (disposing)
{
Valid = false;
if (_gd.Textures.Remove(this))
bool wasValid = Interlocked.Exchange(ref _isValid, 0) != 0;
if (wasValid)
{
_gd.Textures.Remove(this);
_imageView.Dispose();
_imageView2dArray?.Dispose();
@@ -1034,7 +1037,7 @@ namespace Ryujinx.Graphics.Vulkan
_imageViewDraw.Dispose();
}
Storage.DecrementViewsCount();
Storage?.DecrementViewsCount();
if (_renderPasses != null)
{
@@ -1045,22 +1048,22 @@ namespace Ryujinx.Graphics.Vulkan
pass.Dispose();
}
}
if (_selfManagedViews != null)
{
foreach (var view in _selfManagedViews.Values)
{
view.Dispose();
}
_selfManagedViews = null;
}
}
}
}
public void Dispose()
{
if (_selfManagedViews != null)
{
foreach (var view in _selfManagedViews.Values)
{
view.Dispose();
}
_selfManagedViews = null;
}
Dispose(true);
}
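// Illustrative note, not part of the diff: Interlocked.Exchange makes disposal
// idempotent under races with rendering threads. Whichever thread flips _isValid
// from 1 to 0 observes wasValid == true and performs the one-time cleanup; any
// concurrent or repeated Dispose sees wasValid == false and returns without
// touching the image views a second time.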

View File

@@ -1,3 +1,4 @@
using Silk.NET.Vulkan;
using System.Text.RegularExpressions;
namespace Ryujinx.Graphics.Vulkan
@@ -61,5 +62,36 @@ namespace Ryujinx.Graphics.Vulkan
_ => $"0x{id:X}",
};
}
public static string GetFriendlyDriverName(DriverId id)
{
return id switch
{
DriverId.AmdProprietary => "AMD",
DriverId.AmdOpenSource => "AMD (Open)",
DriverId.ArmProprietary => "ARM",
DriverId.BroadcomProprietary => "Broadcom",
DriverId.CoreaviProprietary => "CoreAVI",
DriverId.GgpProprietary => "GGP",
DriverId.GoogleSwiftshader => "SwiftShader",
DriverId.ImaginationProprietary => "Imagination",
DriverId.IntelOpenSourceMesa => "Intel (Open)",
DriverId.IntelProprietaryWindows => "Intel",
DriverId.JuiceProprietary => "Juice",
DriverId.MesaDozen => "Dozen",
DriverId.MesaLlvmpipe => "LLVMpipe",
DriverId.MesaPanvk => "PanVK",
DriverId.MesaRadv => "RADV",
DriverId.MesaTurnip => "Turnip",
DriverId.MesaV3DV => "V3DV",
DriverId.MesaVenus => "Venus",
DriverId.Moltenvk => "MoltenVK",
DriverId.NvidiaProprietary => "NVIDIA",
DriverId.QualcommProprietary => "Qualcomm",
DriverId.SamsungProprietary => "Samsung",
DriverId.VerisiliconProprietary => "Verisilicon",
_ => id.ToString(),
};
}
}
}

View File

@@ -136,7 +136,7 @@ namespace Ryujinx.Graphics.Vulkan
{
instance.EnumeratePhysicalDevices(out var physicalDevices).ThrowOnError();
// First we try to pick the the user preferred GPU.
// First we try to pick the user preferred GPU.
for (int i = 0; i < physicalDevices.Length; i++)
{
if (IsPreferredAndSuitableDevice(api, physicalDevices[i], surface, preferredGpuId))

View File

@@ -87,6 +87,7 @@ namespace Ryujinx.Graphics.Vulkan
internal bool IsIntelWindows { get; private set; }
internal bool IsAmdGcn { get; private set; }
internal bool IsNvidiaPreTuring { get; private set; }
internal bool IsIntelArc { get; private set; }
internal bool IsMoltenVk { get; private set; }
internal bool IsTBDR { get; private set; }
internal bool IsSharedMemory { get; private set; }
@@ -310,6 +311,50 @@ namespace Ryujinx.Graphics.Vulkan
ref var properties = ref properties2.Properties;
var hasDriverProperties = _physicalDevice.TryGetPhysicalDeviceDriverPropertiesKHR(Api, out var driverProperties);
Vendor = VendorUtils.FromId(properties.VendorID);
IsAmdWindows = Vendor == Vendor.Amd && OperatingSystem.IsWindows();
IsIntelWindows = Vendor == Vendor.Intel && OperatingSystem.IsWindows();
IsTBDR =
Vendor == Vendor.Apple ||
Vendor == Vendor.Qualcomm ||
Vendor == Vendor.ARM ||
Vendor == Vendor.Broadcom ||
Vendor == Vendor.ImgTec;
GpuVendor = VendorUtils.GetNameFromId(properties.VendorID);
GpuDriver = hasDriverProperties && !OperatingSystem.IsMacOS() ?
VendorUtils.GetFriendlyDriverName(driverProperties.DriverID) : GpuVendor; // Fall back to the vendor name if driver properties are unavailable, or on macOS where the vendor name is preferred.
fixed (byte* deviceName = properties.DeviceName)
{
GpuRenderer = Marshal.PtrToStringAnsi((IntPtr)deviceName);
}
GpuVersion = $"Vulkan v{ParseStandardVulkanVersion(properties.ApiVersion)}, Driver v{ParseDriverVersion(ref properties)}";
IsAmdGcn = !IsMoltenVk && Vendor == Vendor.Amd && VendorUtils.AmdGcnRegex().IsMatch(GpuRenderer);
if (Vendor == Vendor.Nvidia)
{
var match = VendorUtils.NvidiaConsumerClassRegex().Match(GpuRenderer);
if (match != null && int.TryParse(match.Groups[2].Value, out int gpuNumber))
{
IsNvidiaPreTuring = gpuNumber < 2000;
}
else if (GpuDriver.Contains("TITAN") && !GpuDriver.Contains("RTX"))
{
IsNvidiaPreTuring = true;
}
}
else if (Vendor == Vendor.Intel)
{
IsIntelArc = GpuRenderer.StartsWith("Intel(R) Arc(TM)");
}
ulong minResourceAlignment = Math.Max(
Math.Max(
properties.Limits.MinStorageBufferOffsetAlignment,
@@ -732,56 +777,15 @@ namespace Ryujinx.Graphics.Vulkan
return ParseStandardVulkanVersion(driverVersionRaw);
}
private unsafe void PrintGpuInformation()
{
var properties = _physicalDevice.PhysicalDeviceProperties;
var hasDriverProperties = _physicalDevice.TryGetPhysicalDeviceDriverPropertiesKHR(Api, out var driverProperties);
string vendorName = VendorUtils.GetNameFromId(properties.VendorID);
Vendor = VendorUtils.FromId(properties.VendorID);
IsAmdWindows = Vendor == Vendor.Amd && OperatingSystem.IsWindows();
IsIntelWindows = Vendor == Vendor.Intel && OperatingSystem.IsWindows();
IsTBDR =
Vendor == Vendor.Apple ||
Vendor == Vendor.Qualcomm ||
Vendor == Vendor.ARM ||
Vendor == Vendor.Broadcom ||
Vendor == Vendor.ImgTec;
GpuVendor = vendorName;
GpuDriver = hasDriverProperties ? Marshal.PtrToStringAnsi((IntPtr)driverProperties.DriverName) : vendorName; // Fall back to vendor name if driver name isn't available.
GpuRenderer = Marshal.PtrToStringAnsi((IntPtr)properties.DeviceName);
GpuVersion = $"Vulkan v{ParseStandardVulkanVersion(properties.ApiVersion)}, Driver v{ParseDriverVersion(ref properties)}";
IsAmdGcn = !IsMoltenVk && Vendor == Vendor.Amd && VendorUtils.AmdGcnRegex().IsMatch(GpuRenderer);
if (Vendor == Vendor.Nvidia)
{
var match = VendorUtils.NvidiaConsumerClassRegex().Match(GpuRenderer);
if (match != null && int.TryParse(match.Groups[2].Value, out int gpuNumber))
{
IsNvidiaPreTuring = gpuNumber < 2000;
}
else if (GpuDriver.Contains("TITAN") && !GpuDriver.Contains("RTX"))
{
IsNvidiaPreTuring = true;
}
}
Logger.Notice.Print(LogClass.Gpu, $"{GpuVendor} {GpuRenderer} ({GpuVersion})");
}
internal PrimitiveTopology TopologyRemap(PrimitiveTopology topology)
{
return topology switch
{
PrimitiveTopology.Quads => PrimitiveTopology.Triangles,
PrimitiveTopology.QuadStrip => PrimitiveTopology.TriangleStrip,
PrimitiveTopology.TriangleFan => Capabilities.PortabilitySubset.HasFlag(PortabilitySubsetFlags.NoTriangleFans) ? PrimitiveTopology.Triangles : topology,
PrimitiveTopology.TriangleFan or PrimitiveTopology.Polygon => Capabilities.PortabilitySubset.HasFlag(PortabilitySubsetFlags.NoTriangleFans)
? PrimitiveTopology.Triangles
: topology,
_ => topology,
};
}
@@ -791,11 +795,16 @@ namespace Ryujinx.Graphics.Vulkan
return topology switch
{
PrimitiveTopology.Quads => true,
PrimitiveTopology.TriangleFan => Capabilities.PortabilitySubset.HasFlag(PortabilitySubsetFlags.NoTriangleFans),
PrimitiveTopology.TriangleFan or PrimitiveTopology.Polygon => Capabilities.PortabilitySubset.HasFlag(PortabilitySubsetFlags.NoTriangleFans),
_ => false,
};
}
private void PrintGpuInformation()
{
Logger.Notice.Print(LogClass.Gpu, $"{GpuVendor} {GpuRenderer} ({GpuVersion})");
}
public void Initialize(GraphicsDebugLevel logLevel)
{
SetupContext(logLevel);

View File

@@ -104,20 +104,15 @@ namespace Ryujinx.HLE.FileSystem
foreach (StorageId storageId in Enum.GetValues<StorageId>())
{
string contentDirectory = null;
string contentPathString = null;
string registeredDirectory = null;
try
{
contentPathString = ContentPath.GetContentPath(storageId);
contentDirectory = ContentPath.GetRealPath(contentPathString);
registeredDirectory = Path.Combine(contentDirectory, "registered");
}
catch (NotSupportedException)
if (!ContentPath.TryGetContentPath(storageId, out var contentPathString))
{
continue;
}
if (!ContentPath.TryGetRealPath(contentPathString, out var contentDirectory))
{
continue;
}
var registeredDirectory = Path.Combine(contentDirectory, "registered");
Directory.CreateDirectory(registeredDirectory);
@@ -471,8 +466,8 @@ namespace Ryujinx.HLE.FileSystem
public void InstallFirmware(string firmwareSource)
{
string contentPathString = ContentPath.GetContentPath(StorageId.BuiltInSystem);
string contentDirectory = ContentPath.GetRealPath(contentPathString);
ContentPath.TryGetContentPath(StorageId.BuiltInSystem, out var contentPathString);
ContentPath.TryGetRealPath(contentPathString, out var contentDirectory);
string registeredDirectory = Path.Combine(contentDirectory, "registered");
string temporaryDirectory = Path.Combine(contentDirectory, "temp");

View File

@@ -26,17 +26,19 @@ namespace Ryujinx.HLE.FileSystem
public const string Nintendo = "Nintendo";
public const string Contents = "Contents";
public static string GetRealPath(string switchContentPath)
public static bool TryGetRealPath(string switchContentPath, out string realPath)
{
return switchContentPath switch
realPath = switchContentPath switch
{
SystemContent => Path.Combine(AppDataManager.BaseDirPath, SystemNandPath, Contents),
UserContent => Path.Combine(AppDataManager.BaseDirPath, UserNandPath, Contents),
SdCardContent => Path.Combine(GetSdCardPath(), Nintendo, Contents),
System => Path.Combine(AppDataManager.BaseDirPath, SystemNandPath),
User => Path.Combine(AppDataManager.BaseDirPath, UserNandPath),
_ => throw new NotSupportedException($"Content Path \"`{switchContentPath}`\" is not supported."),
_ => null,
};
return realPath != null;
}
public static string GetContentPath(ContentStorageId contentStorageId)
@@ -50,15 +52,17 @@ namespace Ryujinx.HLE.FileSystem
};
}
public static string GetContentPath(StorageId storageId)
public static bool TryGetContentPath(StorageId storageId, out string contentPath)
{
return storageId switch
contentPath = storageId switch
{
StorageId.BuiltInSystem => SystemContent,
StorageId.BuiltInUser => UserContent,
StorageId.SdCard => SdCardContent,
_ => throw new NotSupportedException($"Storage Id \"`{storageId}`\" is not supported."),
_ => null,
};
return contentPath != null;
}
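// Illustrative caller-side sketch, not part of the diff: the Try-pattern replaces
// catching NotSupportedException with a plain boolean check, e.g.:
//
//   if (ContentPath.TryGetContentPath(storageId, out string contentPath) &&
//       ContentPath.TryGetRealPath(contentPath, out string realPath))
//   {
//       // use realPath; unsupported storage IDs simply fail the check
//   }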
public static StorageId GetStorageId(string contentPathString)

View File

@@ -466,7 +466,7 @@ namespace Ryujinx.HLE.HOS.Applets.SoftwareKeyboard
private void DrawPadButton(IImageProcessingContext context, PointF point, Image icon, string label, bool pressed, bool enabled)
{
// Use relative positions so we can center the the entire drawing later.
// Use relative positions so we can center the entire drawing later.
float iconX = 0;
float iconY = 0;
@@ -522,7 +522,7 @@ namespace Ryujinx.HLE.HOS.Applets.SoftwareKeyboard
{
var labelRectangle = MeasureString(ControllerToggleText, _labelsTextFont);
// Use relative positions so we can center the the entire drawing later.
// Use relative positions so we can center the entire drawing later.
float keyWidth = _keyModeIcon.Width;
float keyHeight = _keyModeIcon.Height;

View File

@@ -72,9 +72,10 @@ namespace Ryujinx.HLE.HOS
AddressSpace addressSpace = null;
if (mode == MemoryManagerMode.HostMapped || mode == MemoryManagerMode.HostMappedUnsafe)
// We want to use host tracked mode if the host page size is > 4KB.
if ((mode == MemoryManagerMode.HostMapped || mode == MemoryManagerMode.HostMappedUnsafe) && MemoryBlock.GetPageSize() <= 0x1000)
{
if (!AddressSpace.TryCreate(context.Memory, addressSpaceSize, MemoryBlock.GetPageSize() == MemoryManagerHostMapped.PageSize, out addressSpace))
if (!AddressSpace.TryCreate(context.Memory, addressSpaceSize, out addressSpace))
{
Logger.Warning?.Print(LogClass.Cpu, "Address space creation failed, falling back to software page table");
@@ -91,13 +92,21 @@ namespace Ryujinx.HLE.HOS
case MemoryManagerMode.HostMapped:
case MemoryManagerMode.HostMappedUnsafe:
if (addressSpaceSize != addressSpace.AddressSpaceSize)
if (addressSpace == null)
{
Logger.Warning?.Print(LogClass.Emulation, $"Allocated address space (0x{addressSpace.AddressSpaceSize:X}) is smaller than guest application requirements (0x{addressSpaceSize:X})");
var memoryManagerHostTracked = new MemoryManagerHostTracked(context.Memory, addressSpaceSize, mode == MemoryManagerMode.HostMappedUnsafe, invalidAccessHandler);
processContext = new ArmProcessContext<MemoryManagerHostTracked>(pid, cpuEngine, _gpu, memoryManagerHostTracked, addressSpaceSize, for64Bit);
}
else
{
if (addressSpaceSize != addressSpace.AddressSpaceSize)
{
Logger.Warning?.Print(LogClass.Emulation, $"Allocated address space (0x{addressSpace.AddressSpaceSize:X}) is smaller than guest application requirements (0x{addressSpaceSize:X})");
}
var memoryManagerHostMapped = new MemoryManagerHostMapped(addressSpace, mode == MemoryManagerMode.HostMappedUnsafe, invalidAccessHandler);
processContext = new ArmProcessContext<MemoryManagerHostMapped>(pid, cpuEngine, _gpu, memoryManagerHostMapped, addressSpace.AddressSpaceSize, for64Bit);
var memoryManagerHostMapped = new MemoryManagerHostMapped(addressSpace, mode == MemoryManagerMode.HostMappedUnsafe, invalidAccessHandler);
processContext = new ArmProcessContext<MemoryManagerHostMapped>(pid, cpuEngine, _gpu, memoryManagerHostMapped, addressSpace.AddressSpaceSize, for64Bit);
}
break;
default:

View File

@@ -11,7 +11,7 @@ namespace Ryujinx.HLE.HOS.Kernel.Memory
{
private readonly IVirtualMemoryManager _cpuMemory;
protected override bool Supports4KBPages => _cpuMemory.Supports4KBPages;
protected override bool UsesPrivateAllocations => _cpuMemory.UsesPrivateAllocations;
public KPageTable(KernelContext context, IVirtualMemoryManager cpuMemory, ulong reservedAddressSpaceSize) : base(context, reservedAddressSpaceSize)
{
@@ -165,6 +165,29 @@ namespace Ryujinx.HLE.HOS.Kernel.Memory
/// <inheritdoc/>
protected override Result MapForeign(IEnumerable<HostMemoryRange> regions, ulong va, ulong size)
{
ulong backingStart = (ulong)Context.Memory.Pointer;
ulong backingEnd = backingStart + Context.Memory.Size;
KPageList pageList = new();
foreach (HostMemoryRange region in regions)
{
// If the range is inside the physical memory, it is shared and we should increment the page count,
// otherwise it is private and we don't need to increment the page count.
if (region.Address >= backingStart && region.Address < backingEnd)
{
pageList.AddRange(region.Address - backingStart + DramMemoryMap.DramBase, region.Size / PageSize);
}
}
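// Illustrative only, with hypothetical numbers: suppose the backing block is mapped
// at host address 0x7F00_0000_0000 and the page size is 4 KiB. A host region inside
// the backing block is classified as shared and translated back to a DRAM address:
//
//   ulong backingStart = 0x7F00_0000_0000;
//   ulong regionAddress = backingStart + 0x2000;   // inside the backing block
//   ulong regionSize = 0x4000;
//   ulong dramAddress = regionAddress - backingStart + DramMemoryMap.DramBase;
//   ulong pagesCount = regionSize / PageSize;      // 4 pages
//
// Regions outside [backingStart, backingEnd) are private allocations and are skipped,
// since they have no DRAM pages whose reference counts need updating.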
using var scopedPageList = new KScopedPageList(Context.MemoryManager, pageList);
foreach (var pageNode in pageList)
{
Context.CommitMemory(pageNode.Address - DramMemoryMap.DramBase, pageNode.PagesCount * PageSize);
}
ulong offset = 0;
foreach (var region in regions)
@@ -174,6 +197,8 @@ namespace Ryujinx.HLE.HOS.Kernel.Memory
offset += region.Size;
}
scopedPageList.SignalSuccess();
return Result.Success;
}

View File

@@ -32,7 +32,7 @@ namespace Ryujinx.HLE.HOS.Kernel.Memory
private const int MaxBlocksNeededForInsertion = 2;
protected readonly KernelContext Context;
protected virtual bool Supports4KBPages => true;
protected virtual bool UsesPrivateAllocations => false;
public ulong AddrSpaceStart { get; private set; }
public ulong AddrSpaceEnd { get; private set; }
@@ -1947,17 +1947,17 @@ namespace Ryujinx.HLE.HOS.Kernel.Memory
Result result;
if (srcPageTable.Supports4KBPages)
if (srcPageTable.UsesPrivateAllocations)
{
result = MapForeign(srcPageTable.GetHostRegions(addressRounded, alignedSize), currentVa, alignedSize);
}
else
{
KPageList pageList = new();
srcPageTable.GetPhysicalRegions(addressRounded, alignedSize, pageList);
result = MapPages(currentVa, pageList, permission, MemoryMapFlags.None);
}
else
{
result = MapForeign(srcPageTable.GetHostRegions(addressRounded, alignedSize), currentVa, alignedSize);
}
if (result != Result.Success)
{

View File

@@ -2,7 +2,6 @@ using Ryujinx.Common;
using Ryujinx.HLE.HOS.Kernel.Common;
using Ryujinx.HLE.HOS.Kernel.Process;
using Ryujinx.Horizon.Common;
using Ryujinx.Memory;
namespace Ryujinx.HLE.HOS.Kernel.Memory
{
@@ -49,17 +48,7 @@ namespace Ryujinx.HLE.HOS.Kernel.Memory
return KernelResult.InvalidPermission;
}
// On platforms with page size > 4 KB, this can fail due to the address not being page aligned,
// we can return an error to force the application to retry with a different address.
try
{
return memoryManager.MapPages(address, _pageList, MemoryState.SharedMemory, permission);
}
catch (InvalidMemoryRegionException)
{
return KernelResult.InvalidMemState;
}
return memoryManager.MapPages(address, _pageList, MemoryState.SharedMemory, permission);
}
public Result UnmapFromProcess(KPageTableBase memoryManager, ulong address, ulong size, KProcess process)

View File

@@ -173,36 +173,16 @@ namespace Ryujinx.HLE.HOS
if (StrEquals(RomfsDir, modDir.Name))
{
bool enabled;
try
{
var modData = modMetadata.Mods.Find(x => modDir.FullName.Contains(x.Path));
enabled = modData.Enabled;
}
catch
{
// Mod is not in the list yet. New mods should be enabled by default.
enabled = true;
}
var modData = modMetadata.Mods.Find(x => modDir.FullName.Contains(x.Path));
var enabled = modData?.Enabled ?? true;
mods.RomfsDirs.Add(mod = new Mod<DirectoryInfo>(dir.Name, modDir, enabled));
types.Append('R');
}
else if (StrEquals(ExefsDir, modDir.Name))
{
bool enabled;
try
{
var modData = modMetadata.Mods.Find(x => modDir.FullName.Contains(x.Path));
enabled = modData.Enabled;
}
catch
{
// Mod is not in the list yet. New mods should be enabled by default.
enabled = true;
}
var modData = modMetadata.Mods.Find(x => modDir.FullName.Contains(x.Path));
var enabled = modData?.Enabled ?? true;
mods.ExefsDirs.Add(mod = new Mod<DirectoryInfo>(dir.Name, modDir, enabled));
types.Append('E');
@@ -218,7 +198,7 @@ namespace Ryujinx.HLE.HOS
if (types.Length > 0)
{
Logger.Info?.Print(LogClass.ModLoader, $"Found mod '{mod.Name}' [{types}]");
Logger.Info?.Print(LogClass.ModLoader, $"Found {(mod.Enabled ? "enabled" : "disabled")} mod '{mod.Name}' [{types}]");
}
}
}

View File

@@ -1,39 +0,0 @@
using Ryujinx.Common.Logging;
using Ryujinx.HLE.HOS.Services.Ptm.Ts.Types;
namespace Ryujinx.HLE.HOS.Services.Ptm.Ts
{
[Service("ts")]
class IMeasurementServer : IpcService
{
private const uint DefaultTemperature = 42u;
public IMeasurementServer(ServiceCtx context) { }
[CommandCmif(1)]
// GetTemperature(Location location) -> u32
public ResultCode GetTemperature(ServiceCtx context)
{
Location location = (Location)context.RequestData.ReadByte();
Logger.Stub?.PrintStub(LogClass.ServicePtm, new { location });
context.ResponseData.Write(DefaultTemperature);
return ResultCode.Success;
}
[CommandCmif(3)]
// GetTemperatureMilliC(Location location) -> u32
public ResultCode GetTemperatureMilliC(ServiceCtx context)
{
Location location = (Location)context.RequestData.ReadByte();
Logger.Stub?.PrintStub(LogClass.ServicePtm, new { location });
context.ResponseData.Write(DefaultTemperature * 1000);
return ResultCode.Success;
}
}
}

View File

@@ -0,0 +1,63 @@
using Ryujinx.Common.Logging;
using Ryujinx.Horizon.Common;
using Ryujinx.Horizon.Sdk.Sf;
using Ryujinx.Horizon.Sdk.Ts;
using Ryujinx.Horizon.Ts.Ipc;
namespace Ryujinx.Horizon.Ptm.Ipc
{
partial class MeasurementServer : IMeasurementServer
{
// NOTE: Values are randomly chosen.
public const int DefaultTemperature = 42;
public const int MinimumTemperature = 0;
public const int MaximumTemperature = 100;
[CmifCommand(0)] // 1.0.0-16.1.0
public Result GetTemperatureRange(out int minimumTemperature, out int maximumTemperature, Location location)
{
Logger.Stub?.PrintStub(LogClass.ServicePtm, new { location });
minimumTemperature = MinimumTemperature;
maximumTemperature = MaximumTemperature;
return Result.Success;
}
[CmifCommand(1)] // 1.0.0-16.1.0
public Result GetTemperature(out int temperature, Location location)
{
Logger.Stub?.PrintStub(LogClass.ServicePtm, new { location });
temperature = DefaultTemperature;
return Result.Success;
}
[CmifCommand(2)] // 1.0.0-13.2.1
public Result SetMeasurementMode(Location location, byte measurementMode)
{
Logger.Stub?.PrintStub(LogClass.ServicePtm, new { location, measurementMode });
return Result.Success;
}
[CmifCommand(3)] // 1.0.0-13.2.1
public Result GetTemperatureMilliC(out int temperatureMilliC, Location location)
{
Logger.Stub?.PrintStub(LogClass.ServicePtm, new { location });
temperatureMilliC = DefaultTemperature * 1000;
return Result.Success;
}
[CmifCommand(4)] // 8.0.0+
public Result OpenSession(out ISession session, DeviceCode deviceCode)
{
session = new Session(deviceCode);
return Result.Success;
}
}
}

View File

@@ -0,0 +1,47 @@
using Ryujinx.Common.Logging;
using Ryujinx.Horizon.Common;
using Ryujinx.Horizon.Ptm.Ipc;
using Ryujinx.Horizon.Sdk.Sf;
using Ryujinx.Horizon.Sdk.Ts;
namespace Ryujinx.Horizon.Ts.Ipc
{
partial class Session : ISession
{
private readonly DeviceCode _deviceCode;
public Session(DeviceCode deviceCode)
{
_deviceCode = deviceCode;
}
[CmifCommand(0)]
public Result GetTemperatureRange(out int minimumTemperature, out int maximumTemperature)
{
Logger.Stub?.PrintStub(LogClass.ServicePtm, new { _deviceCode });
minimumTemperature = MeasurementServer.MinimumTemperature;
maximumTemperature = MeasurementServer.MaximumTemperature;
return Result.Success;
}
[CmifCommand(2)]
public Result SetMeasurementMode(byte measurementMode)
{
Logger.Stub?.PrintStub(LogClass.ServicePtm, new { _deviceCode, measurementMode });
return Result.Success;
}
[CmifCommand(4)]
public Result GetTemperature(out int temperature)
{
Logger.Stub?.PrintStub(LogClass.ServicePtm, new { _deviceCode });
temperature = MeasurementServer.DefaultTemperature;
return Result.Success;
}
}
}

View File

@@ -0,0 +1,44 @@
using Ryujinx.Horizon.Ptm.Ipc;
using Ryujinx.Horizon.Sdk.Sf.Hipc;
using Ryujinx.Horizon.Sdk.Sm;
namespace Ryujinx.Horizon.Ptm
{
class TsIpcServer
{
private const int MaxSessionsCount = 4;
private const int PointerBufferSize = 0;
private const int MaxDomains = 0;
private const int MaxDomainObjects = 0;
private const int MaxPortsCount = 1;
private static readonly ManagerOptions _managerOptions = new(PointerBufferSize, MaxDomains, MaxDomainObjects, false);
private SmApi _sm;
private ServerManager _serverManager;
public void Initialize()
{
HeapAllocator allocator = new();
_sm = new SmApi();
_sm.Initialize().AbortOnFailure();
_serverManager = new ServerManager(allocator, _sm, MaxPortsCount, _managerOptions, MaxSessionsCount);
_serverManager.RegisterObjectForServer(new MeasurementServer(), ServiceName.Encode("ts"), MaxSessionsCount);
}
public void ServiceRequests()
{
_serverManager.ServiceRequests();
}
public void Shutdown()
{
_serverManager.Dispose();
_sm.Dispose();
}
}
}

View File

@@ -0,0 +1,17 @@
namespace Ryujinx.Horizon.Ptm
{
class TsMain : IService
{
public static void Main(ServiceTable serviceTable)
{
TsIpcServer ipcServer = new();
ipcServer.Initialize();
serviceTable.SignalServiceReady();
ipcServer.ServiceRequests();
ipcServer.Shutdown();
}
}
}

View File

@@ -0,0 +1,8 @@
namespace Ryujinx.Horizon.Sdk.Ts
{
enum DeviceCode : uint
{
Internal = 0x41000001,
External = 0x41000002,
}
}

View File

@@ -0,0 +1,14 @@
using Ryujinx.Horizon.Common;
using Ryujinx.Horizon.Sdk.Sf;
namespace Ryujinx.Horizon.Sdk.Ts
{
interface IMeasurementServer : IServiceObject
{
Result GetTemperatureRange(out int minimumTemperature, out int maximumTemperature, Location location);
Result GetTemperature(out int temperature, Location location);
Result SetMeasurementMode(Location location, byte measurementMode);
Result GetTemperatureMilliC(out int temperatureMilliC, Location location);
Result OpenSession(out ISession session, DeviceCode deviceCode);
}
}

View File

@@ -0,0 +1,12 @@
using Ryujinx.Horizon.Common;
using Ryujinx.Horizon.Sdk.Sf;
namespace Ryujinx.Horizon.Sdk.Ts
{
interface ISession : IServiceObject
{
Result GetTemperatureRange(out int minimumTemperature, out int maximumTemperature);
Result GetTemperature(out int temperature);
Result SetMeasurementMode(byte measurementMode);
}
}

View File

@@ -1,4 +1,4 @@
namespace Ryujinx.HLE.HOS.Services.Ptm.Ts.Types
namespace Ryujinx.Horizon.Sdk.Ts
{
enum Location : byte
{

View File

@@ -11,6 +11,7 @@ using Ryujinx.Horizon.Ngc;
using Ryujinx.Horizon.Ovln;
using Ryujinx.Horizon.Prepo;
using Ryujinx.Horizon.Psc;
using Ryujinx.Horizon.Ptm;
using Ryujinx.Horizon.Sdk.Arp;
using Ryujinx.Horizon.Srepo;
using Ryujinx.Horizon.Usb;
@@ -54,6 +55,7 @@ namespace Ryujinx.Horizon
RegisterService<PrepoMain>();
RegisterService<PscMain>();
RegisterService<SrepoMain>();
RegisterService<TsMain>();
RegisterService<UsbMain>();
RegisterService<WlanMain>();

View File

@@ -3,7 +3,6 @@ using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
namespace Ryujinx.Memory
{
@@ -11,10 +10,10 @@ namespace Ryujinx.Memory
/// Represents an address space manager.
/// Supports virtual memory region mapping, address translation and read/write access to mapped regions.
/// </summary>
public sealed class AddressSpaceManager : VirtualMemoryManagerBase<ulong, nuint>, IVirtualMemoryManager, IWritableBlock
public sealed class AddressSpaceManager : VirtualMemoryManagerBase, IVirtualMemoryManager
{
/// <inheritdoc/>
public bool Supports4KBPages => true;
public bool UsesPrivateAllocations => false;
/// <summary>
/// Address space width in bits.
@@ -63,8 +62,7 @@ namespace Ryujinx.Memory
}
}
/// <inheritdoc/>
public void MapForeign(ulong va, nuint hostPointer, ulong size)
public override void MapForeign(ulong va, nuint hostPointer, ulong size)
{
AssertValidAddressAndSize(va, size);
@@ -92,106 +90,6 @@ namespace Ryujinx.Memory
}
}
/// <inheritdoc/>
public T Read<T>(ulong va) where T : unmanaged
{
return MemoryMarshal.Cast<byte, T>(GetSpan(va, Unsafe.SizeOf<T>()))[0];
}
/// <inheritdoc/>
public void Write<T>(ulong va, T value) where T : unmanaged
{
Write(va, MemoryMarshal.Cast<T, byte>(MemoryMarshal.CreateSpan(ref value, 1)));
}
/// <inheritdoc/>
public void Write(ulong va, ReadOnlySpan<byte> data)
{
if (data.Length == 0)
{
return;
}
AssertValidAddressAndSize(va, (ulong)data.Length);
if (IsContiguousAndMapped(va, data.Length))
{
data.CopyTo(GetHostSpanContiguous(va, data.Length));
}
else
{
int offset = 0, size;
if ((va & PageMask) != 0)
{
size = Math.Min(data.Length, PageSize - (int)(va & PageMask));
data[..size].CopyTo(GetHostSpanContiguous(va, size));
offset += size;
}
for (; offset < data.Length; offset += size)
{
size = Math.Min(data.Length - offset, PageSize);
data.Slice(offset, size).CopyTo(GetHostSpanContiguous(va + (ulong)offset, size));
}
}
}
/// <inheritdoc/>
public bool WriteWithRedundancyCheck(ulong va, ReadOnlySpan<byte> data)
{
Write(va, data);
return true;
}
/// <inheritdoc/>
public ReadOnlySpan<byte> GetSpan(ulong va, int size, bool tracked = false)
{
if (size == 0)
{
return ReadOnlySpan<byte>.Empty;
}
if (IsContiguousAndMapped(va, size))
{
return GetHostSpanContiguous(va, size);
}
else
{
Span<byte> data = new byte[size];
Read(va, data);
return data;
}
}
/// <inheritdoc/>
public unsafe WritableRegion GetWritableRegion(ulong va, int size, bool tracked = false)
{
if (size == 0)
{
return new WritableRegion(null, va, Memory<byte>.Empty);
}
if (IsContiguousAndMapped(va, size))
{
return new WritableRegion(null, va, new NativeMemoryManager<byte>((byte*)GetHostAddress(va), size).Memory);
}
else
{
Memory<byte> memory = new byte[size];
GetSpan(va, size).CopyTo(memory.Span);
return new WritableRegion(this, va, memory);
}
}
/// <inheritdoc/>
public unsafe ref T GetRef<T>(ulong va) where T : unmanaged
{
@@ -203,50 +101,6 @@ namespace Ryujinx.Memory
return ref *(T*)GetHostAddress(va);
}
/// <inheritdoc/>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static int GetPagesCount(ulong va, uint size, out ulong startVa)
{
// WARNING: Always check if ulong does not overflow during the operations.
startVa = va & ~(ulong)PageMask;
ulong vaSpan = (va - startVa + size + PageMask) & ~(ulong)PageMask;
return (int)(vaSpan / PageSize);
}
private static void ThrowMemoryNotContiguous() => throw new MemoryNotContiguousException();
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private bool IsContiguousAndMapped(ulong va, int size) => IsContiguous(va, size) && IsMapped(va);
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private bool IsContiguous(ulong va, int size)
{
if (!ValidateAddress(va) || !ValidateAddressAndSize(va, (ulong)size))
{
return false;
}
int pages = GetPagesCount(va, (uint)size, out va);
for (int page = 0; page < pages - 1; page++)
{
if (!ValidateAddress(va + PageSize))
{
return false;
}
if (GetHostAddress(va) + PageSize != GetHostAddress(va + PageSize))
{
return false;
}
va += PageSize;
}
return true;
}
/// <inheritdoc/>
public IEnumerable<HostMemoryRange> GetHostRegions(ulong va, ulong size)
{
@@ -283,9 +137,9 @@ namespace Ryujinx.Memory
{
var hostRegion = hostRegions[i];
if ((ulong)hostRegion.Address >= backingStart && (ulong)hostRegion.Address < backingEnd)
if (hostRegion.Address >= backingStart && hostRegion.Address < backingEnd)
{
regions[count++] = new MemoryRange((ulong)hostRegion.Address - backingStart, hostRegion.Size);
regions[count++] = new MemoryRange(hostRegion.Address - backingStart, hostRegion.Size);
}
}
@@ -304,7 +158,7 @@ namespace Ryujinx.Memory
return null;
}
int pages = GetPagesCount(va, (uint)size, out va);
int pages = GetPagesCount(va, size, out va);
var regions = new List<HostMemoryRange>();
@@ -336,9 +190,8 @@ namespace Ryujinx.Memory
return regions;
}
/// <inheritdoc/>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public bool IsMapped(ulong va)
public override bool IsMapped(ulong va)
{
if (!ValidateAddress(va))
{
@@ -351,7 +204,7 @@ namespace Ryujinx.Memory
/// <inheritdoc/>
public bool IsRangeMapped(ulong va, ulong size)
{
if (size == 0UL)
if (size == 0)
{
return true;
}
@@ -376,11 +229,6 @@ namespace Ryujinx.Memory
return true;
}
private unsafe Span<byte> GetHostSpanContiguous(ulong va, int size)
{
return new Span<byte>((void*)GetHostAddress(va), size);
}
private nuint GetHostAddress(ulong va)
{
return _pageTable.Read(va) + (nuint)(va & PageMask);
@@ -392,21 +240,21 @@ namespace Ryujinx.Memory
}
/// <inheritdoc/>
public void TrackingReprotect(ulong va, ulong size, MemoryPermission protection)
public void TrackingReprotect(ulong va, ulong size, MemoryPermission protection, bool guest = false)
{
throw new NotImplementedException();
}
/// <inheritdoc/>
public void SignalMemoryTracking(ulong va, ulong size, bool write, bool precise = false, int? exemptId = null)
{
// Only the ARM Memory Manager has tracking for now.
}
protected unsafe override Memory<byte> GetPhysicalAddressMemory(nuint pa, int size)
=> new NativeMemoryManager<byte>((byte*)pa, size).Memory;
protected override unsafe Span<byte> GetPhysicalAddressSpan(nuint pa, int size)
=> new((void*)pa, size);
=> new Span<byte>((void*)pa, size);
protected override nuint TranslateVirtualAddressForRead(ulong va)
protected override nuint TranslateVirtualAddressChecked(ulong va)
=> GetHostAddress(va);
protected override nuint TranslateVirtualAddressUnchecked(ulong va)
=> GetHostAddress(va);
}
}

View File

@@ -0,0 +1,60 @@
using System;
using System.Buffers;
using System.Runtime.InteropServices;
namespace Ryujinx.Memory
{
/// <summary>
/// A concrete implementation of <seealso cref="ReadOnlySequenceSegment{Byte}"/>,
/// with methods to help build a full sequence.
/// </summary>
public sealed class BytesReadOnlySequenceSegment : ReadOnlySequenceSegment<byte>
{
public BytesReadOnlySequenceSegment(Memory<byte> memory) => Memory = memory;
public BytesReadOnlySequenceSegment Append(Memory<byte> memory)
{
var nextSegment = new BytesReadOnlySequenceSegment(memory)
{
RunningIndex = RunningIndex + Memory.Length
};
Next = nextSegment;
return nextSegment;
}
/// <summary>
/// Attempts to determine if the current <seealso cref="Memory{Byte}"/> and <paramref name="other"/> are contiguous.
/// Only works if both were created by a <seealso cref="NativeMemoryManager{Byte}"/>.
/// </summary>
/// <param name="other">The segment to check if continuous with the current one</param>
/// <param name="contiguousStart">The starting address of the contiguous segment</param>
/// <param name="contiguousSize">The size of the contiguous segment</param>
/// <returns>True if the segments are contiguous, otherwise false</returns>
public unsafe bool IsContiguousWith(Memory<byte> other, out nuint contiguousStart, out int contiguousSize)
{
if (MemoryMarshal.TryGetMemoryManager<byte, NativeMemoryManager<byte>>(Memory, out var thisMemoryManager) &&
MemoryMarshal.TryGetMemoryManager<byte, NativeMemoryManager<byte>>(other, out var otherMemoryManager) &&
thisMemoryManager.Pointer + thisMemoryManager.Length == otherMemoryManager.Pointer)
{
contiguousStart = (nuint)thisMemoryManager.Pointer;
contiguousSize = thisMemoryManager.Length + otherMemoryManager.Length;
return true;
}
else
{
contiguousStart = 0;
contiguousSize = 0;
return false;
}
}
/// <summary>
/// Replaces the current <seealso cref="Memory{Byte}"/> value with the one provided.
/// </summary>
/// <param name="memory">The new segment to hold in this <seealso cref="BytesReadOnlySequenceSegment"/></param>
public void Replace(Memory<byte> memory)
=> Memory = memory;
}
}
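A minimal usage sketch, not part of the diff: a chain of segments built with Append can be wrapped in a ReadOnlySequence<byte> using the standard start/end segment constructor. The buffer names below are hypothetical.

using System;
using System.Buffers;

Memory<byte> first = new byte[0x1000];
Memory<byte> second = new byte[0x2000];

var startSegment = new BytesReadOnlySequenceSegment(first);
var endSegment = startSegment.Append(second); // endSegment.RunningIndex == 0x1000

var sequence = new ReadOnlySequence<byte>(startSegment, 0, endSegment, endSegment.Memory.Length);
// sequence.Length == 0x3000; consumers can walk it segment by segment without copying.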

View File

@@ -8,10 +8,10 @@ namespace Ryujinx.Memory
public interface IVirtualMemoryManager
{
/// <summary>
/// Indicates whenever the memory manager supports aliasing pages at 4KB granularity.
/// Indicates whether the memory manager creates private allocations when the <see cref="MemoryMapFlags.Private"/> flag is set on map.
/// </summary>
/// <returns>True if 4KB pages are supported by the memory manager, false otherwise</returns>
bool Supports4KBPages { get; }
/// <returns>True if private mappings might be used, false otherwise</returns>
bool UsesPrivateAllocations { get; }
/// <summary>
/// Maps a virtual memory range into a physical memory range.
@@ -124,6 +124,16 @@ namespace Ryujinx.Memory
}
}
/// <summary>
/// Gets a read-only sequence of read-only memory blocks from CPU mapped memory.
/// </summary>
/// <param name="va">Virtual address of the data</param>
/// <param name="size">Size of the data</param>
/// <param name="tracked">True if read tracking is triggered on the memory</param>
/// <returns>A read-only sequence of read-only memory of the data</returns>
/// <exception cref="InvalidMemoryRegionException">Throw for unhandled invalid or unmapped memory accesses</exception>
ReadOnlySequence<byte> GetReadOnlySequence(ulong va, int size, bool tracked = false);
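// Illustrative caller-side sketch, not part of the diff: each element of the returned
// sequence is one contiguous block of host memory, so callers can consume mapped data
// without first flattening it into a single buffer. `memoryManager` and `stream` are
// hypothetical placeholders:
//
//   foreach (ReadOnlyMemory<byte> segment in memoryManager.GetReadOnlySequence(va, size))
//   {
//       stream.Write(segment.Span);
//   }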
/// <summary>
/// Gets a read-only span of data from CPU mapped memory.
/// </summary>
@@ -214,6 +224,7 @@ namespace Ryujinx.Memory
/// <param name="va">Virtual address base</param>
/// <param name="size">Size of the region to protect</param>
/// <param name="protection">Memory protection to set</param>
void TrackingReprotect(ulong va, ulong size, MemoryPermission protection);
/// <param name="guest">True if the protection is for guest access, false otherwise</param>
void TrackingReprotect(ulong va, ulong size, MemoryPermission protection, bool guest);
}
}

View File

@@ -174,7 +174,7 @@ namespace Ryujinx.Memory
/// <param name="offset">Starting offset of the range being read</param>
/// <param name="data">Span where the bytes being read will be copied to</param>
/// <exception cref="ObjectDisposedException">Throw when the memory block has already been disposed</exception>
/// <exception cref="InvalidMemoryRegionException">Throw when the memory region specified for the the data is out of range</exception>
/// <exception cref="InvalidMemoryRegionException">Throw when the memory region specified for the data is out of range</exception>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public void Read(ulong offset, Span<byte> data)
{
@@ -188,7 +188,7 @@ namespace Ryujinx.Memory
/// <param name="offset">Offset where the data is located</param>
/// <returns>Data at the specified address</returns>
/// <exception cref="ObjectDisposedException">Throw when the memory block has already been disposed</exception>
/// <exception cref="InvalidMemoryRegionException">Throw when the memory region specified for the the data is out of range</exception>
/// <exception cref="InvalidMemoryRegionException">Throw when the memory region specified for the data is out of range</exception>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public T Read<T>(ulong offset) where T : unmanaged
{
@@ -201,7 +201,7 @@ namespace Ryujinx.Memory
/// <param name="offset">Starting offset of the range being written</param>
/// <param name="data">Span where the bytes being written will be copied from</param>
/// <exception cref="ObjectDisposedException">Throw when the memory block has already been disposed</exception>
/// <exception cref="InvalidMemoryRegionException">Throw when the memory region specified for the the data is out of range</exception>
/// <exception cref="InvalidMemoryRegionException">Throw when the memory region specified for the data is out of range</exception>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public void Write(ulong offset, ReadOnlySpan<byte> data)
{
@@ -215,7 +215,7 @@ namespace Ryujinx.Memory
/// <param name="offset">Offset to write the data into</param>
/// <param name="data">Data to be written</param>
/// <exception cref="ObjectDisposedException">Throw when the memory block has already been disposed</exception>
/// <exception cref="InvalidMemoryRegionException">Throw when the memory region specified for the the data is out of range</exception>
/// <exception cref="InvalidMemoryRegionException">Throw when the memory region specified for the data is out of range</exception>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public void Write<T>(ulong offset, T data) where T : unmanaged
{

View File

@@ -14,6 +14,10 @@ namespace Ryujinx.Memory
_length = length;
}
public unsafe T* Pointer => _pointer;
public int Length => _length;
public override Span<T> GetSpan()
{
return new Span<T>((void*)_pointer, _length);

View File

@@ -14,9 +14,14 @@ namespace Ryujinx.Memory.Tracking
// Only use these from within the lock.
private readonly NonOverlappingRangeList<VirtualRegion> _virtualRegions;
// Guest virtual regions are a subset of the normal virtual regions, with potentially different protection
// and expanded area of effect on platforms that don't support misaligned page protection.
private readonly NonOverlappingRangeList<VirtualRegion> _guestVirtualRegions;
private readonly int _pageSize;
private readonly bool _singleByteGuestTracking;
/// <summary>
/// This lock must be obtained when traversing or updating the region-handle hierarchy.
/// It is not required when reading dirty flags.
@@ -27,16 +32,27 @@ namespace Ryujinx.Memory.Tracking
/// Create a new tracking structure for the given "physical" memory block,
/// with a given "virtual" memory manager that will provide mappings and virtual memory protection.
/// </summary>
/// <remarks>
/// If <paramref name="singleByteGuestTracking" /> is true, the memory manager must also support protection on partially
/// unmapped regions without throwing exceptions or dropping protection on the mapped portion.
/// </remarks>
/// <param name="memoryManager">Virtual memory manager</param>
/// <param name="block">Physical memory block</param>
/// <param name="pageSize">Page size of the virtual memory space</param>
public MemoryTracking(IVirtualMemoryManager memoryManager, int pageSize, InvalidAccessHandler invalidAccessHandler = null)
/// <param name="invalidAccessHandler">Method to call for invalid memory accesses</param>
/// <param name="singleByteGuestTracking">True if the guest only signals writes for the first byte</param>
public MemoryTracking(
IVirtualMemoryManager memoryManager,
int pageSize,
InvalidAccessHandler invalidAccessHandler = null,
bool singleByteGuestTracking = false)
{
_memoryManager = memoryManager;
_pageSize = pageSize;
_invalidAccessHandler = invalidAccessHandler;
_singleByteGuestTracking = singleByteGuestTracking;
_virtualRegions = new NonOverlappingRangeList<VirtualRegion>();
_guestVirtualRegions = new NonOverlappingRangeList<VirtualRegion>();
}
private (ulong address, ulong size) PageAlign(ulong address, ulong size)
@@ -62,20 +78,25 @@ namespace Ryujinx.Memory.Tracking
{
ref var overlaps = ref ThreadStaticArray<VirtualRegion>.Get();
int count = _virtualRegions.FindOverlapsNonOverlapping(va, size, ref overlaps);
for (int i = 0; i < count; i++)
for (int type = 0; type < 2; type++)
{
VirtualRegion region = overlaps[i];
NonOverlappingRangeList<VirtualRegion> regions = type == 0 ? _virtualRegions : _guestVirtualRegions;
// If the region has been fully remapped, signal that it has been mapped again.
bool remapped = _memoryManager.IsRangeMapped(region.Address, region.Size);
if (remapped)
int count = regions.FindOverlapsNonOverlapping(va, size, ref overlaps);
for (int i = 0; i < count; i++)
{
region.SignalMappingChanged(true);
}
VirtualRegion region = overlaps[i];
region.UpdateProtection();
// If the region has been fully remapped, signal that it has been mapped again.
bool remapped = _memoryManager.IsRangeMapped(region.Address, region.Size);
if (remapped)
{
region.SignalMappingChanged(true);
}
region.UpdateProtection();
}
}
}
}
@@ -95,27 +116,58 @@ namespace Ryujinx.Memory.Tracking
{
ref var overlaps = ref ThreadStaticArray<VirtualRegion>.Get();
int count = _virtualRegions.FindOverlapsNonOverlapping(va, size, ref overlaps);
for (int i = 0; i < count; i++)
for (int type = 0; type < 2; type++)
{
VirtualRegion region = overlaps[i];
NonOverlappingRangeList<VirtualRegion> regions = type == 0 ? _virtualRegions : _guestVirtualRegions;
region.SignalMappingChanged(false);
int count = regions.FindOverlapsNonOverlapping(va, size, ref overlaps);
for (int i = 0; i < count; i++)
{
VirtualRegion region = overlaps[i];
region.SignalMappingChanged(false);
}
}
}
}
/// <summary>
/// Alter a tracked memory region to properly capture unaligned accesses.
/// For most memory manager modes, this does nothing.
/// </summary>
/// <param name="address">Original region address</param>
/// <param name="size">Original region size</param>
/// <returns>A new address and size for tracking unaligned accesses</returns>
internal (ulong newAddress, ulong newSize) GetUnalignedSafeRegion(ulong address, ulong size)
{
if (_singleByteGuestTracking)
{
// The guest only signals the first byte of each memory access with the current memory manager.
// To catch unaligned access properly, we need to also protect the page before the address.
// Assume that the address and size are already aligned.
return (address - (ulong)_pageSize, size + (ulong)_pageSize);
}
else
{
return (address, size);
}
}
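A worked instance of this arithmetic, as a hedged standalone sketch (the class and method names are illustrative, not part of the codebase): with 4 KiB pages, a region at 0x2000 of size 0x1000 expands to 0x1000 with size 0x2000, so a misaligned access that starts at 0x1FF8 still lands on protected memory.

using System;

class UnalignedSafeRegionSketch
{
    // Mirrors the single-byte guest tracking case above: protect one extra
    // page before the region so an access straddling the boundary is caught.
    static (ulong Address, ulong Size) Expand(ulong address, ulong size, ulong pageSize = 0x1000)
    {
        return (address - pageSize, size + pageSize);
    }

    static void Main()
    {
        var (addr, sz) = Expand(0x2000, 0x1000);
        Console.WriteLine($"0x{addr:X}, 0x{sz:X}"); // 0x1000, 0x2000
    }
}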
/// <summary>
/// Get a list of virtual regions that a handle covers.
/// </summary>
/// <param name="va">Starting virtual memory address of the handle</param>
/// <param name="size">Size of the handle's memory region</param>
/// <param name="guest">True if getting handles for guest protection, false otherwise</param>
/// <returns>A list of virtual regions within the given range</returns>
internal List<VirtualRegion> GetVirtualRegionsForHandle(ulong va, ulong size)
internal List<VirtualRegion> GetVirtualRegionsForHandle(ulong va, ulong size, bool guest)
{
List<VirtualRegion> result = new();
_virtualRegions.GetOrAddRegions(result, va, size, (va, size) => new VirtualRegion(this, va, size));
NonOverlappingRangeList<VirtualRegion> regions = guest ? _guestVirtualRegions : _virtualRegions;
regions.GetOrAddRegions(result, va, size, (va, size) => new VirtualRegion(this, va, size, guest));
return result;
}
@@ -126,7 +178,14 @@ namespace Ryujinx.Memory.Tracking
/// <param name="region">Region to remove</param>
internal void RemoveVirtual(VirtualRegion region)
{
_virtualRegions.Remove(region);
if (region.Guest)
{
_guestVirtualRegions.Remove(region);
}
else
{
_virtualRegions.Remove(region);
}
}
/// <summary>
@@ -137,10 +196,11 @@ namespace Ryujinx.Memory.Tracking
/// <param name="handles">Handles to inherit state from or reuse. When none are present, provide null</param>
/// <param name="granularity">Desired granularity of write tracking</param>
/// <param name="id">Handle ID</param>
/// <param name="flags">Region flags</param>
/// <returns>The memory tracking handle</returns>
public MultiRegionHandle BeginGranularTracking(ulong address, ulong size, IEnumerable<IRegionHandle> handles, ulong granularity, int id)
public MultiRegionHandle BeginGranularTracking(ulong address, ulong size, IEnumerable<IRegionHandle> handles, ulong granularity, int id, RegionFlags flags = RegionFlags.None)
{
return new MultiRegionHandle(this, address, size, handles, granularity, id);
return new MultiRegionHandle(this, address, size, handles, granularity, id, flags);
}
/// <summary>
@@ -164,15 +224,16 @@ namespace Ryujinx.Memory.Tracking
/// <param name="address">CPU virtual address of the region</param>
/// <param name="size">Size of the region</param>
/// <param name="id">Handle ID</param>
/// <param name="flags">Region flags</param>
/// <returns>The memory tracking handle</returns>
public RegionHandle BeginTracking(ulong address, ulong size, int id)
public RegionHandle BeginTracking(ulong address, ulong size, int id, RegionFlags flags = RegionFlags.None)
{
var (paAddress, paSize) = PageAlign(address, size);
lock (TrackingLock)
{
bool mapped = _memoryManager.IsRangeMapped(address, size);
RegionHandle handle = new(this, paAddress, paSize, address, size, id, mapped);
RegionHandle handle = new(this, paAddress, paSize, address, size, id, flags, mapped);
return handle;
}
@@ -186,15 +247,16 @@ namespace Ryujinx.Memory.Tracking
/// <param name="bitmap">The bitmap owning the dirty flag for this handle</param>
/// <param name="bit">The bit of this handle within the dirty flag</param>
/// <param name="id">Handle ID</param>
/// <param name="flags">Region flags</param>
/// <returns>The memory tracking handle</returns>
internal RegionHandle BeginTrackingBitmap(ulong address, ulong size, ConcurrentBitmap bitmap, int bit, int id)
internal RegionHandle BeginTrackingBitmap(ulong address, ulong size, ConcurrentBitmap bitmap, int bit, int id, RegionFlags flags = RegionFlags.None)
{
var (paAddress, paSize) = PageAlign(address, size);
lock (TrackingLock)
{
bool mapped = _memoryManager.IsRangeMapped(address, size);
RegionHandle handle = new(this, paAddress, paSize, address, size, bitmap, bit, id, mapped);
RegionHandle handle = new(this, paAddress, paSize, address, size, bitmap, bit, id, flags, mapped);
return handle;
}
@@ -202,6 +264,7 @@ namespace Ryujinx.Memory.Tracking
/// <summary>
/// Signal that a virtual memory event happened at the given location.
/// The memory event is assumed to be triggered by guest code.
/// </summary>
/// <param name="address">Virtual address accessed</param>
/// <param name="size">Size of the region affected in bytes</param>
@@ -209,7 +272,7 @@ namespace Ryujinx.Memory.Tracking
/// <returns>True if the event triggered any tracking regions, false otherwise</returns>
public bool VirtualMemoryEvent(ulong address, ulong size, bool write)
{
return VirtualMemoryEvent(address, size, write, precise: false, null);
return VirtualMemoryEvent(address, size, write, precise: false, exemptId: null, guest: true);
}
/// <summary>
@@ -222,8 +285,9 @@ namespace Ryujinx.Memory.Tracking
/// <param name="write">Whether the region was written to or read</param>
/// <param name="precise">True if the access is precise, false otherwise</param>
/// <param name="exemptId">Optional ID that of the handles that should not be signalled</param>
/// <param name="guest">True if the access is from the guest, false otherwise</param>
/// <returns>True if the event triggered any tracking regions, false otherwise</returns>
public bool VirtualMemoryEvent(ulong address, ulong size, bool write, bool precise, int? exemptId = null)
public bool VirtualMemoryEvent(ulong address, ulong size, bool write, bool precise, int? exemptId = null, bool guest = false)
{
// Look up the virtual region using the region list.
// Signal up the chain to relevant handles.
@@ -234,7 +298,9 @@ namespace Ryujinx.Memory.Tracking
{
ref var overlaps = ref ThreadStaticArray<VirtualRegion>.Get();
int count = _virtualRegions.FindOverlapsNonOverlapping(address, size, ref overlaps);
NonOverlappingRangeList<VirtualRegion> regions = guest ? _guestVirtualRegions : _virtualRegions;
int count = regions.FindOverlapsNonOverlapping(address, size, ref overlaps);
if (count == 0 && !precise)
{
@@ -242,7 +308,7 @@ namespace Ryujinx.Memory.Tracking
{
// TODO: There is currently the possibility that a page can be protected after its virtual region is removed.
// This code handles that case when it happens, but it would be better to find out how this happens.
_memoryManager.TrackingReprotect(address & ~(ulong)(_pageSize - 1), (ulong)_pageSize, MemoryPermission.ReadAndWrite);
_memoryManager.TrackingReprotect(address & ~(ulong)(_pageSize - 1), (ulong)_pageSize, MemoryPermission.ReadAndWrite, guest);
return true; // This memory _should_ be mapped, so we need to try again.
}
else
@@ -252,6 +318,12 @@ namespace Ryujinx.Memory.Tracking
}
else
{
if (guest && _singleByteGuestTracking)
{
// Increase the access size to trigger handles with misaligned accesses.
size += (ulong)_pageSize;
}
for (int i = 0; i < count; i++)
{
VirtualRegion region = overlaps[i];
@@ -285,9 +357,10 @@ namespace Ryujinx.Memory.Tracking
/// </summary>
/// <param name="region">Region to reprotect</param>
/// <param name="permission">Memory permission to protect with</param>
internal void ProtectVirtualRegion(VirtualRegion region, MemoryPermission permission)
/// <param name="guest">True if the protection is for guest access, false otherwise</param>
internal void ProtectVirtualRegion(VirtualRegion region, MemoryPermission permission, bool guest)
{
_memoryManager.TrackingReprotect(region.Address, region.Size, permission);
_memoryManager.TrackingReprotect(region.Address, region.Size, permission, guest);
}
/// <summary>

View File

@@ -37,7 +37,8 @@ namespace Ryujinx.Memory.Tracking
ulong size,
IEnumerable<IRegionHandle> handles,
ulong granularity,
int id)
int id,
RegionFlags flags)
{
_handles = new RegionHandle[(size + granularity - 1) / granularity];
Granularity = granularity;
@@ -62,7 +63,7 @@ namespace Ryujinx.Memory.Tracking
// Fill any gap left before this handle.
while (i < startIndex)
{
RegionHandle fillHandle = tracking.BeginTrackingBitmap(address + (ulong)i * granularity, granularity, _dirtyBitmap, i, id);
RegionHandle fillHandle = tracking.BeginTrackingBitmap(address + (ulong)i * granularity, granularity, _dirtyBitmap, i, id, flags);
fillHandle.Parent = this;
_handles[i++] = fillHandle;
}
@@ -83,7 +84,7 @@ namespace Ryujinx.Memory.Tracking
while (i < endIndex)
{
RegionHandle splitHandle = tracking.BeginTrackingBitmap(address + (ulong)i * granularity, granularity, _dirtyBitmap, i, id);
RegionHandle splitHandle = tracking.BeginTrackingBitmap(address + (ulong)i * granularity, granularity, _dirtyBitmap, i, id, flags);
splitHandle.Parent = this;
splitHandle.Reprotect(handle.Dirty);
@@ -106,7 +107,7 @@ namespace Ryujinx.Memory.Tracking
// Fill any remaining space with new handles.
while (i < _handles.Length)
{
RegionHandle handle = tracking.BeginTrackingBitmap(address + (ulong)i * granularity, granularity, _dirtyBitmap, i, id);
RegionHandle handle = tracking.BeginTrackingBitmap(address + (ulong)i * granularity, granularity, _dirtyBitmap, i, id, flags);
handle.Parent = this;
_handles[i++] = handle;
}

View File

@@ -0,0 +1,21 @@
using System;
namespace Ryujinx.Memory.Tracking
{
[Flags]
public enum RegionFlags
{
None = 0,
/// <summary>
/// Access to the resource is expected to occasionally be unaligned.
/// With some memory managers, guest protection must extend into the previous page to cover unaligned access.
/// If this is not expected, protection is not altered, which can avoid unintended resource dirty/flush.
/// </summary>
UnalignedAccess = 1,
}
}
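A hedged sketch of how a caller might opt a resource into this expanded guest protection (the MemoryTracking instance, address, size and handle ID are assumed to come from the caller):

using Ryujinx.Memory.Tracking;

static class UnalignedTrackingSketch
{
    // Handles created with UnalignedAccess get guest protection that also
    // covers the page before the region (see GetUnalignedSafeRegion above).
    public static RegionHandle Track(MemoryTracking tracking, ulong va, ulong size, int id)
    {
        return tracking.BeginTracking(va, size, id, RegionFlags.UnalignedAccess);
    }
}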

View File

@@ -55,6 +55,8 @@ namespace Ryujinx.Memory.Tracking
private RegionSignal _preAction; // Action to perform before a read or write. This will block the memory access.
private PreciseRegionSignal _preciseAction; // Action to perform on a precise read or write.
private readonly List<VirtualRegion> _regions;
private readonly List<VirtualRegion> _guestRegions;
private readonly List<VirtualRegion> _allRegions;
private readonly MemoryTracking _tracking;
private bool _disposed;
@@ -99,6 +101,7 @@ namespace Ryujinx.Memory.Tracking
/// <param name="bitmap">The bitmap the dirty flag for this handle is stored in</param>
/// <param name="bit">The bit index representing the dirty flag for this handle</param>
/// <param name="id">Handle ID</param>
/// <param name="flags">Region flags</param>
/// <param name="mapped">True if the region handle starts mapped</param>
internal RegionHandle(
MemoryTracking tracking,
@@ -109,6 +112,7 @@ namespace Ryujinx.Memory.Tracking
ConcurrentBitmap bitmap,
int bit,
int id,
RegionFlags flags,
bool mapped = true)
{
Bitmap = bitmap;
@@ -128,11 +132,12 @@ namespace Ryujinx.Memory.Tracking
RealEndAddress = realAddress + realSize;
_tracking = tracking;
_regions = tracking.GetVirtualRegionsForHandle(address, size);
foreach (var region in _regions)
{
region.Handles.Add(this);
}
_regions = tracking.GetVirtualRegionsForHandle(address, size, false);
_guestRegions = GetGuestRegions(tracking, address, size, flags);
_allRegions = new List<VirtualRegion>(_regions.Count + _guestRegions.Count);
InitializeRegions();
}
/// <summary>
@@ -145,8 +150,9 @@ namespace Ryujinx.Memory.Tracking
/// <param name="realAddress">The real, unaligned address of the handle</param>
/// <param name="realSize">The real, unaligned size of the handle</param>
/// <param name="id">Handle ID</param>
/// <param name="flags">Region flags</param>
/// <param name="mapped">True if the region handle starts mapped</param>
internal RegionHandle(MemoryTracking tracking, ulong address, ulong size, ulong realAddress, ulong realSize, int id, bool mapped = true)
internal RegionHandle(MemoryTracking tracking, ulong address, ulong size, ulong realAddress, ulong realSize, int id, RegionFlags flags, bool mapped = true)
{
Bitmap = new ConcurrentBitmap(1, mapped);
@@ -163,8 +169,37 @@ namespace Ryujinx.Memory.Tracking
RealEndAddress = realAddress + realSize;
_tracking = tracking;
_regions = tracking.GetVirtualRegionsForHandle(address, size);
foreach (var region in _regions)
_regions = tracking.GetVirtualRegionsForHandle(address, size, false);
_guestRegions = GetGuestRegions(tracking, address, size, flags);
_allRegions = new List<VirtualRegion>(_regions.Count + _guestRegions.Count);
InitializeRegions();
}
private List<VirtualRegion> GetGuestRegions(MemoryTracking tracking, ulong address, ulong size, RegionFlags flags)
{
ulong guestAddress;
ulong guestSize;
if (flags.HasFlag(RegionFlags.UnalignedAccess))
{
(guestAddress, guestSize) = tracking.GetUnalignedSafeRegion(address, size);
}
else
{
(guestAddress, guestSize) = (address, size);
}
return tracking.GetVirtualRegionsForHandle(guestAddress, guestSize, true);
}
private void InitializeRegions()
{
_allRegions.AddRange(_regions);
_allRegions.AddRange(_guestRegions);
foreach (var region in _allRegions)
{
region.Handles.Add(this);
}
@@ -321,7 +356,7 @@ namespace Ryujinx.Memory.Tracking
lock (_tracking.TrackingLock)
{
foreach (VirtualRegion region in _regions)
foreach (VirtualRegion region in _allRegions)
{
protectionChanged |= region.UpdateProtection();
}
@@ -379,7 +414,7 @@ namespace Ryujinx.Memory.Tracking
{
lock (_tracking.TrackingLock)
{
foreach (VirtualRegion region in _regions)
foreach (VirtualRegion region in _allRegions)
{
region.UpdateProtection();
}
@@ -414,7 +449,16 @@ namespace Ryujinx.Memory.Tracking
/// <param name="region">Virtual region to add as a child</param>
internal void AddChild(VirtualRegion region)
{
_regions.Add(region);
if (region.Guest)
{
_guestRegions.Add(region);
}
else
{
_regions.Add(region);
}
_allRegions.Add(region);
}
/// <summary>
@@ -469,7 +513,7 @@ namespace Ryujinx.Memory.Tracking
lock (_tracking.TrackingLock)
{
foreach (VirtualRegion region in _regions)
foreach (VirtualRegion region in _allRegions)
{
region.RemoveHandle(this);
}

View File

@@ -13,10 +13,14 @@ namespace Ryujinx.Memory.Tracking
private readonly MemoryTracking _tracking;
private MemoryPermission _lastPermission;
public VirtualRegion(MemoryTracking tracking, ulong address, ulong size, MemoryPermission lastPermission = MemoryPermission.Invalid) : base(address, size)
public bool Guest { get; }
public VirtualRegion(MemoryTracking tracking, ulong address, ulong size, bool guest, MemoryPermission lastPermission = MemoryPermission.Invalid) : base(address, size)
{
_lastPermission = lastPermission;
_tracking = tracking;
Guest = guest;
}
/// <inheritdoc/>
@@ -66,9 +70,12 @@ namespace Ryujinx.Memory.Tracking
{
_lastPermission = MemoryPermission.Invalid;
foreach (RegionHandle handle in Handles)
if (!Guest)
{
handle.SignalMappingChanged(mapped);
foreach (RegionHandle handle in Handles)
{
handle.SignalMappingChanged(mapped);
}
}
}
@@ -103,7 +110,7 @@ namespace Ryujinx.Memory.Tracking
if (_lastPermission != permission)
{
_tracking.ProtectVirtualRegion(this, permission);
_tracking.ProtectVirtualRegion(this, permission, Guest);
_lastPermission = permission;
return true;
@@ -131,7 +138,7 @@ namespace Ryujinx.Memory.Tracking
public override INonOverlappingRange Split(ulong splitAddress)
{
VirtualRegion newRegion = new(_tracking, splitAddress, EndAddress - splitAddress, _lastPermission);
VirtualRegion newRegion = new(_tracking, splitAddress, EndAddress - splitAddress, Guest, _lastPermission);
Size = splitAddress - Address;
// The new region inherits all of our parents.

View File

@@ -1,34 +1,171 @@
using Ryujinx.Common.Memory;
using System;
using System.Numerics;
using System.Buffers;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
namespace Ryujinx.Memory
{
public abstract class VirtualMemoryManagerBase<TVirtual, TPhysical>
where TVirtual : IBinaryInteger<TVirtual>
where TPhysical : IBinaryInteger<TPhysical>
public abstract class VirtualMemoryManagerBase : IWritableBlock
{
public const int PageBits = 12;
public const int PageSize = 1 << PageBits;
public const int PageMask = PageSize - 1;
protected abstract TVirtual AddressSpaceSize { get; }
protected abstract ulong AddressSpaceSize { get; }
public virtual void Read(TVirtual va, Span<byte> data)
public virtual ReadOnlySequence<byte> GetReadOnlySequence(ulong va, int size, bool tracked = false)
{
if (size == 0)
{
return ReadOnlySequence<byte>.Empty;
}
if (tracked)
{
SignalMemoryTracking(va, (ulong)size, false);
}
if (IsContiguousAndMapped(va, size))
{
nuint pa = TranslateVirtualAddressUnchecked(va);
return new ReadOnlySequence<byte>(GetPhysicalAddressMemory(pa, size));
}
else
{
AssertValidAddressAndSize(va, size);
int offset = 0, segmentSize;
BytesReadOnlySequenceSegment first = null, last = null;
if ((va & PageMask) != 0)
{
nuint pa = TranslateVirtualAddressChecked(va);
segmentSize = Math.Min(size, PageSize - (int)(va & PageMask));
Memory<byte> memory = GetPhysicalAddressMemory(pa, segmentSize);
first = last = new BytesReadOnlySequenceSegment(memory);
offset += segmentSize;
}
for (; offset < size; offset += segmentSize)
{
nuint pa = TranslateVirtualAddressChecked(va + (ulong)offset);
segmentSize = Math.Min(size - offset, PageSize);
Memory<byte> memory = GetPhysicalAddressMemory(pa, segmentSize);
if (first is null)
{
first = last = new BytesReadOnlySequenceSegment(memory);
}
else
{
if (last.IsContiguousWith(memory, out nuint contiguousStart, out int contiguousSize))
{
last.Replace(GetPhysicalAddressMemory(contiguousStart, contiguousSize));
}
else
{
last = last.Append(memory);
}
}
}
return new ReadOnlySequence<byte>(first, 0, last, (int)(size - last.RunningIndex));
}
}
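A hedged consumer-side sketch: because the sequence exposes the mapped segments directly, a caller can scan a range without an intermediate copy (the memory manager instance and range are assumed):

using System;
using Ryujinx.Memory;

static class SequenceScanSketch
{
    // Sum a virtual range segment by segment; no staging buffer is
    // allocated even when the range is not physically contiguous.
    public static long Checksum(IVirtualMemoryManager mm, ulong va, int size)
    {
        long sum = 0;

        foreach (ReadOnlyMemory<byte> segment in mm.GetReadOnlySequence(va, size))
        {
            foreach (byte b in segment.Span)
            {
                sum += b;
            }
        }

        return sum;
    }
}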
public virtual ReadOnlySpan<byte> GetSpan(ulong va, int size, bool tracked = false)
{
if (size == 0)
{
return ReadOnlySpan<byte>.Empty;
}
if (tracked)
{
SignalMemoryTracking(va, (ulong)size, false);
}
if (IsContiguousAndMapped(va, size))
{
nuint pa = TranslateVirtualAddressUnchecked(va);
return GetPhysicalAddressSpan(pa, size);
}
else
{
Span<byte> data = new byte[size];
Read(va, data);
return data;
}
}
public virtual WritableRegion GetWritableRegion(ulong va, int size, bool tracked = false)
{
if (size == 0)
{
return new WritableRegion(null, va, Memory<byte>.Empty);
}
if (tracked)
{
SignalMemoryTracking(va, (ulong)size, true);
}
if (IsContiguousAndMapped(va, size))
{
nuint pa = TranslateVirtualAddressUnchecked(va);
return new WritableRegion(null, va, GetPhysicalAddressMemory(pa, size));
}
else
{
IMemoryOwner<byte> memoryOwner = ByteMemoryPool.Rent(size);
Read(va, memoryOwner.Memory.Span);
return new WritableRegion(this, va, memoryOwner);
}
}
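Usage follows the dispose-for-writeback pattern; a minimal sketch (instance and range assumed):

using Ryujinx.Memory;

static class WritableRegionSketch
{
    // For a contiguous mapping the span aliases guest memory directly; for
    // the pooled fallback above, Dispose() writes the buffer back (and now
    // also returns the rented memory to the pool).
    public static void ZeroFill(IVirtualMemoryManager mm, ulong va, int size)
    {
        using WritableRegion region = mm.GetWritableRegion(va, size, tracked: true);

        region.Memory.Span.Clear();
    }
}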
public abstract bool IsMapped(ulong va);
public virtual void MapForeign(ulong va, nuint hostPointer, ulong size)
{
throw new NotSupportedException();
}
public virtual T Read<T>(ulong va) where T : unmanaged
{
return MemoryMarshal.Cast<byte, T>(GetSpan(va, Unsafe.SizeOf<T>()))[0];
}
public virtual void Read(ulong va, Span<byte> data)
{
if (data.Length == 0)
{
return;
}
AssertValidAddressAndSize(va, TVirtual.CreateChecked(data.Length));
AssertValidAddressAndSize(va, data.Length);
int offset = 0, size;
if ((int.CreateTruncating(va) & PageMask) != 0)
if ((va & PageMask) != 0)
{
TPhysical pa = TranslateVirtualAddressForRead(va);
nuint pa = TranslateVirtualAddressChecked(va);
size = Math.Min(data.Length, PageSize - ((int.CreateTruncating(va) & PageMask)));
size = Math.Min(data.Length, PageSize - (int)(va & PageMask));
GetPhysicalAddressSpan(pa, size).CopyTo(data[..size]);
@@ -37,7 +174,7 @@ namespace Ryujinx.Memory
for (; offset < data.Length; offset += size)
{
TPhysical pa = TranslateVirtualAddressForRead(va + TVirtual.CreateChecked(offset));
nuint pa = TranslateVirtualAddressChecked(va + (ulong)offset);
size = Math.Min(data.Length - offset, PageSize);
@@ -45,13 +182,84 @@ namespace Ryujinx.Memory
}
}
public virtual T ReadTracked<T>(ulong va) where T : unmanaged
{
SignalMemoryTracking(va, (ulong)Unsafe.SizeOf<T>(), false);
return Read<T>(va);
}
public virtual void SignalMemoryTracking(ulong va, ulong size, bool write, bool precise = false, int? exemptId = null)
{
// No default implementation
}
public virtual void Write(ulong va, ReadOnlySpan<byte> data)
{
if (data.Length == 0)
{
return;
}
SignalMemoryTracking(va, (ulong)data.Length, true);
WriteImpl(va, data);
}
public virtual void Write<T>(ulong va, T value) where T : unmanaged
{
Write(va, MemoryMarshal.Cast<T, byte>(MemoryMarshal.CreateSpan(ref value, 1)));
}
public virtual void WriteUntracked(ulong va, ReadOnlySpan<byte> data)
{
if (data.Length == 0)
{
return;
}
WriteImpl(va, data);
}
public virtual bool WriteWithRedundancyCheck(ulong va, ReadOnlySpan<byte> data)
{
if (data.Length == 0)
{
return false;
}
if (IsContiguousAndMapped(va, data.Length))
{
SignalMemoryTracking(va, (ulong)data.Length, false);
nuint pa = TranslateVirtualAddressChecked(va);
var target = GetPhysicalAddressSpan(pa, data.Length);
bool changed = !data.SequenceEqual(target);
if (changed)
{
data.CopyTo(target);
}
return changed;
}
else
{
Write(va, data);
return true;
}
}
/// <summary>
/// Ensures the combination of virtual address and size is part of the addressable space.
/// </summary>
/// <param name="va">Virtual address of the range</param>
/// <param name="size">Size of the range in bytes</param>
/// <exception cref="InvalidMemoryRegionException">Throw when the memory region specified outside the addressable space</exception>
protected void AssertValidAddressAndSize(TVirtual va, TVirtual size)
protected void AssertValidAddressAndSize(ulong va, ulong size)
{
if (!ValidateAddressAndSize(va, size))
{
@@ -59,16 +267,82 @@ namespace Ryujinx.Memory
}
}
protected abstract Span<byte> GetPhysicalAddressSpan(TPhysical pa, int size);
/// <summary>
/// Ensures the combination of virtual address and size is part of the addressable space.
/// </summary>
/// <param name="va">Virtual address of the range</param>
/// <param name="size">Size of the range in bytes</param>
/// <exception cref="InvalidMemoryRegionException">Throw when the memory region specified outside the addressable space</exception>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
protected void AssertValidAddressAndSize(ulong va, int size)
=> AssertValidAddressAndSize(va, (ulong)size);
protected abstract TPhysical TranslateVirtualAddressForRead(TVirtual va);
/// <summary>
/// Computes the number of pages in a virtual address range.
/// </summary>
/// <param name="va">Virtual address of the range</param>
/// <param name="size">Size of the range</param>
/// <param name="startVa">The virtual address of the beginning of the first page</param>
/// <remarks>This function does not differentiate between allocated and unallocated pages.</remarks>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
protected static int GetPagesCount(ulong va, ulong size, out ulong startVa)
{
// WARNING: Take care that none of the ulong operations below can overflow.
startVa = va & ~(ulong)PageMask;
ulong vaSpan = (va - startVa + size + PageMask) & ~(ulong)PageMask;
return (int)(vaSpan / PageSize);
}
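A quick check of the arithmetic with illustrative values:

// va = 0x1234, size = 0x2000, PageMask = 0xFFF
// startVa = 0x1234 & ~0xFFF                   = 0x1000
// vaSpan  = (0x234 + 0x2000 + 0xFFF) & ~0xFFF = 0x3000
// pages   = 0x3000 / 0x1000                   = 3  (pages at 0x1000, 0x2000 and 0x3000)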
protected abstract Memory<byte> GetPhysicalAddressMemory(nuint pa, int size);
protected abstract Span<byte> GetPhysicalAddressSpan(nuint pa, int size);
[MethodImpl(MethodImplOptions.AggressiveInlining)]
protected bool IsContiguous(ulong va, int size) => IsContiguous(va, (ulong)size);
protected virtual bool IsContiguous(ulong va, ulong size)
{
if (!ValidateAddress(va) || !ValidateAddressAndSize(va, size))
{
return false;
}
int pages = GetPagesCount(va, size, out va);
for (int page = 0; page < pages - 1; page++)
{
if (!ValidateAddress(va + PageSize))
{
return false;
}
if (TranslateVirtualAddressUnchecked(va) + PageSize != TranslateVirtualAddressUnchecked(va + PageSize))
{
return false;
}
va += PageSize;
}
return true;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
protected bool IsContiguousAndMapped(ulong va, int size)
=> IsContiguous(va, size) && IsMapped(va);
protected abstract nuint TranslateVirtualAddressChecked(ulong va);
protected abstract nuint TranslateVirtualAddressUnchecked(ulong va);
/// <summary>
/// Checks if the virtual address is part of the addressable space.
/// </summary>
/// <param name="va">Virtual address</param>
/// <returns>True if the virtual address is part of the addressable space</returns>
protected bool ValidateAddress(TVirtual va)
[MethodImpl(MethodImplOptions.AggressiveInlining)]
protected bool ValidateAddress(ulong va)
{
return va < AddressSpaceSize;
}
@@ -79,13 +353,53 @@ namespace Ryujinx.Memory
/// <param name="va">Virtual address of the range</param>
/// <param name="size">Size of the range in bytes</param>
/// <returns>True if the combination of virtual address and size is part of the addressable space</returns>
protected bool ValidateAddressAndSize(TVirtual va, TVirtual size)
protected bool ValidateAddressAndSize(ulong va, ulong size)
{
TVirtual endVa = va + size;
ulong endVa = va + size;
return endVa >= va && endVa >= size && endVa <= AddressSpaceSize;
}
protected static void ThrowInvalidMemoryRegionException(string message)
=> throw new InvalidMemoryRegionException(message);
protected static void ThrowMemoryNotContiguous()
=> throw new MemoryNotContiguousException();
protected virtual void WriteImpl(ulong va, ReadOnlySpan<byte> data)
{
AssertValidAddressAndSize(va, data.Length);
if (IsContiguousAndMapped(va, data.Length))
{
nuint pa = TranslateVirtualAddressUnchecked(va);
data.CopyTo(GetPhysicalAddressSpan(pa, data.Length));
}
else
{
int offset = 0, size;
if ((va & PageMask) != 0)
{
nuint pa = TranslateVirtualAddressChecked(va);
size = Math.Min(data.Length, PageSize - (int)(va & PageMask));
data[..size].CopyTo(GetPhysicalAddressSpan(pa, size));
offset += size;
}
for (; offset < data.Length; offset += size)
{
nuint pa = TranslateVirtualAddressChecked(va + (ulong)offset);
size = Math.Min(data.Length - offset, PageSize);
data.Slice(offset, size).CopyTo(GetPhysicalAddressSpan(pa, size));
}
}
}
}
}
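To make the contract concrete, a hedged sketch of a minimal subclass: an identity-mapped manager over a flat buffer, implementing only the abstract members shown above (this toy type is illustrative, not part of the codebase):

using System;
using Ryujinx.Memory;

class FlatMemoryManagerSketch : VirtualMemoryManagerBase
{
    private readonly byte[] _backing;

    public FlatMemoryManagerSketch(int size)
    {
        _backing = new byte[size];
    }

    protected override ulong AddressSpaceSize => (ulong)_backing.Length;

    // Every valid address is "mapped" in this toy model.
    public override bool IsMapped(ulong va) => va < AddressSpaceSize;

    protected override Memory<byte> GetPhysicalAddressMemory(nuint pa, int size)
        => _backing.AsMemory((int)pa, size);

    protected override Span<byte> GetPhysicalAddressSpan(nuint pa, int size)
        => _backing.AsSpan((int)pa, size);

    // Identity translation: the virtual address is the physical offset.
    protected override nuint TranslateVirtualAddressChecked(ulong va)
    {
        AssertValidAddressAndSize(va, 1UL);

        return (nuint)va;
    }

    protected override nuint TranslateVirtualAddressUnchecked(ulong va) => (nuint)va;
}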

View File

@@ -1,4 +1,5 @@
using System;
using System.Buffers;
namespace Ryujinx.Memory
{
@@ -6,6 +7,7 @@ namespace Ryujinx.Memory
{
private readonly IWritableBlock _block;
private readonly ulong _va;
private readonly IMemoryOwner<byte> _memoryOwner;
private readonly bool _tracked;
private bool NeedsWriteback => _block != null;
@@ -20,6 +22,12 @@ namespace Ryujinx.Memory
Memory = memory;
}
public WritableRegion(IWritableBlock block, ulong va, IMemoryOwner<byte> memoryOwner, bool tracked = false)
: this(block, va, memoryOwner.Memory, tracked)
{
_memoryOwner = memoryOwner;
}
public void Dispose()
{
if (NeedsWriteback)
@@ -33,6 +41,8 @@ namespace Ryujinx.Memory
_block.WriteUntracked(_va, Memory.Span);
}
}
_memoryOwner?.Dispose();
}
}
}

View File

@@ -1,13 +1,14 @@
using Ryujinx.Memory;
using Ryujinx.Memory.Range;
using System;
using System.Buffers;
using System.Collections.Generic;
namespace Ryujinx.Tests.Memory
{
public class MockVirtualMemoryManager : IVirtualMemoryManager
{
public bool Supports4KBPages => true;
public bool UsesPrivateAllocations => false;
public bool NoMappings = false;
@@ -57,6 +58,11 @@ namespace Ryujinx.Tests.Memory
throw new NotImplementedException();
}
public ReadOnlySequence<byte> GetReadOnlySequence(ulong va, int size, bool tracked = false)
{
throw new NotImplementedException();
}
public ReadOnlySpan<byte> GetSpan(ulong va, int size, bool tracked = false)
{
throw new NotImplementedException();
@@ -107,7 +113,7 @@ namespace Ryujinx.Tests.Memory
throw new NotImplementedException();
}
public void TrackingReprotect(ulong va, ulong size, MemoryPermission protection)
public void TrackingReprotect(ulong va, ulong size, MemoryPermission protection, bool guest)
{
OnProtect?.Invoke(va, size, protection);
}

View File

@@ -15,7 +15,7 @@ namespace Ryujinx.UI.Common.Configuration
/// <summary>
/// The current version of the file format
/// </summary>
public const int CurrentVersion = 49;
public const int CurrentVersion = 50;
/// <summary>
/// Version of the configuration file format
@@ -162,6 +162,11 @@ namespace Ryujinx.UI.Common.Configuration
/// </summary>
public bool ShowConfirmExit { get; set; }
/// <summary>
/// Enables hardware-accelerated rendering for Avalonia
/// </summary>
public bool EnableHardwareAcceleration { get; set; }
/// <summary>
/// Whether to hide cursor on idle, always or never
/// </summary>

View File

@@ -626,6 +626,11 @@ namespace Ryujinx.UI.Common.Configuration
/// </summary>
public ReactiveObject<bool> ShowConfirmExit { get; private set; }
/// <summary>
/// Enables hardware-accelerated rendering for Avalonia
/// </summary>
public ReactiveObject<bool> EnableHardwareAcceleration { get; private set; }
/// <summary>
/// Hide Cursor on Idle
/// </summary>
@@ -642,6 +647,7 @@ namespace Ryujinx.UI.Common.Configuration
EnableDiscordIntegration = new ReactiveObject<bool>();
CheckUpdatesOnStart = new ReactiveObject<bool>();
ShowConfirmExit = new ReactiveObject<bool>();
EnableHardwareAcceleration = new ReactiveObject<bool>();
HideCursor = new ReactiveObject<HideCursorMode>();
}
@@ -678,6 +684,7 @@ namespace Ryujinx.UI.Common.Configuration
EnableDiscordIntegration = EnableDiscordIntegration,
CheckUpdatesOnStart = CheckUpdatesOnStart,
ShowConfirmExit = ShowConfirmExit,
EnableHardwareAcceleration = EnableHardwareAcceleration,
HideCursor = HideCursor,
EnableVsync = Graphics.EnableVsync,
EnableShaderCache = Graphics.EnableShaderCache,
@@ -785,6 +792,7 @@ namespace Ryujinx.UI.Common.Configuration
EnableDiscordIntegration.Value = true;
CheckUpdatesOnStart.Value = true;
ShowConfirmExit.Value = true;
EnableHardwareAcceleration.Value = true;
HideCursor.Value = HideCursorMode.OnIdle;
Graphics.EnableVsync.Value = true;
Graphics.EnableShaderCache.Value = true;
@@ -1442,6 +1450,15 @@ namespace Ryujinx.UI.Common.Configuration
configurationFileUpdated = true;
}
if (configurationFileFormat.Version < 50)
{
Ryujinx.Common.Logging.Logger.Warning?.Print(LogClass.Application, $"Outdated configuration version {configurationFileFormat.Version}, migrating to version 50.");
configurationFileFormat.EnableHardwareAcceleration = true;
configurationFileUpdated = true;
}
Logger.EnableFileLog.Value = configurationFileFormat.EnableFileLog;
Graphics.ResScale.Value = configurationFileFormat.ResScale;
Graphics.ResScaleCustom.Value = configurationFileFormat.ResScaleCustom;
@@ -1472,6 +1489,7 @@ namespace Ryujinx.UI.Common.Configuration
EnableDiscordIntegration.Value = configurationFileFormat.EnableDiscordIntegration;
CheckUpdatesOnStart.Value = configurationFileFormat.CheckUpdatesOnStart;
ShowConfirmExit.Value = configurationFileFormat.ShowConfirmExit;
EnableHardwareAcceleration.Value = configurationFileFormat.EnableHardwareAcceleration;
HideCursor.Value = configurationFileFormat.HideCursor;
Graphics.EnableVsync.Value = configurationFileFormat.EnableVsync;
Graphics.EnableShaderCache.Value = configurationFileFormat.EnableShaderCache;
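Future settings would follow the same migration pattern as the version-50 step above: bump CurrentVersion, then append one guarded step. A hypothetical sketch for a later version (the version number and the SomeNewSetting field are invented for illustration):

if (configurationFileFormat.Version < 51)
{
    Ryujinx.Common.Logging.Logger.Warning?.Print(LogClass.Application, $"Outdated configuration version {configurationFileFormat.Version}, migrating to version 51.");

    // Hypothetical field; default chosen to preserve existing behavior.
    configurationFileFormat.SomeNewSetting = true;

    configurationFileUpdated = true;
}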

View File

@@ -7,7 +7,7 @@ namespace Ryujinx.UI.Common
public static class DiscordIntegrationModule
{
private const string Description = "A simple, experimental Nintendo Switch emulator.";
private const string CliendId = "568815339807309834";
private const string ApplicationId = "1216775165866807456";
private static DiscordRpcClient _discordClient;
private static RichPresence _discordPresenceMain;
@@ -24,14 +24,14 @@ namespace Ryujinx.UI.Common
Details = "Main Menu",
State = "Idling",
Timestamps = Timestamps.Now,
Buttons = new[]
{
Buttons =
[
new Button
{
Label = "Website",
Url = "https://ryujinx.org/",
Url = "https://ryujinx.org/",
},
},
],
};
ConfigurationState.Instance.EnableDiscordIntegration.Event += Update;
@@ -52,7 +52,7 @@ namespace Ryujinx.UI.Common
// If we need to activate it and the client isn't active, initialize it
if (evnt.NewValue && _discordClient == null)
{
_discordClient = new DiscordRpcClient(CliendId);
_discordClient = new DiscordRpcClient(ApplicationId);
_discordClient.Initialize();
_discordClient.SetPresence(_discordPresenceMain);
@@ -74,14 +74,14 @@ namespace Ryujinx.UI.Common
Details = $"Playing {titleName}",
State = (titleId == "0000000000000000") ? "Homebrew" : titleId.ToUpper(),
Timestamps = Timestamps.Now,
Buttons = new[]
{
Buttons =
[
new Button
{
Label = "Website",
Url = "https://ryujinx.org/",
},
},
],
});
}

View File

@@ -8,6 +8,7 @@ namespace Ryujinx.UI.Common.Helper
public static string[] Arguments { get; private set; }
public static bool? OverrideDockedMode { get; private set; }
public static bool? OverrideHardwareAcceleration { get; private set; }
public static string OverrideGraphicsBackend { get; private set; }
public static string OverrideHideCursor { get; private set; }
public static string BaseDirPathArg { get; private set; }
@@ -87,6 +88,12 @@ namespace Ryujinx.UI.Common.Helper
OverrideHideCursor = args[++i];
break;
case "--software-gui":
OverrideHardwareAcceleration = false;
break;
case "--hardware-gui":
OverrideHardwareAcceleration = true;
break;
default:
LaunchPathArg = arg;
break;
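Taken together with the configuration default above, these switches let a user force the GUI renderer at launch, e.g. passing --software-gui to rule out hardware-acceleration issues without editing the saved configuration.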

[Binary image assets changed (previews not shown): one file added at 16 KiB; four files updated from roughly 10 KiB to 18 KiB.]

Some files were not shown because too many files have changed in this diff.