
Scarlet/Violet Trainer picture/icon visualizer



Hi everybody. In my free time I decided to do a little research on the format in which trainer photos/icons are saved in S/V.

After some trial and error, I discovered that each pixel is represented by a sequence of 8 bytes, of which the first 2 represent a color in BGR 565 encoding and the third is some sort of alpha/transparency channel. I'm currently not aware of the purpose of the other bytes, since editing them didn't bring any noticeable change. Then, thanks to this recent commit, I was able to get the correct size and aspect ratio for the images, and I assembled a little Python script that I'm attaching to this post.
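For the curious, decoding a single 8-byte pixel record under these observations can be sketched like this (the little-endian byte order and the exact channel placement within the 5-6-5 value are my assumptions, not a confirmed spec):

```python
import struct

def decode_pixel(record: bytes):
    """Decode one 8-byte pixel record into (r, g, b, alpha). Layout assumed."""
    value, = struct.unpack_from("<H", record, 0)  # bytes 0-1: 16-bit color
    b = (value >> 11) & 0x1F   # BGR 565: blue in the top 5 bits (assumed)
    g = (value >> 5) & 0x3F
    r = value & 0x1F
    # expand the 5/6-bit channels to 8 bits
    r8, g8, b8 = (r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2)
    alpha = record[2]          # byte 2: alpha-like transparency value
    return (r8, g8, b8, alpha)
```

For example, `decode_pixel(bytes([0xFF, 0xFF, 0xFF, 0, 0, 0, 0, 0]))` gives `(255, 255, 255, 255)`, i.e. an opaque white pixel.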

It is pretty simple to use: just open the save block editor in PKHeX, export the image block you want to see, check the corresponding values for width, height, and size (use the previous link for reference until PKHeX gets an official update), then run the script.

For example, if we want to save the current profile picture:

  • export the block at 0x14C5A101 as 'picture.bin'
  • check the width at 0xFEAA87DA, e.g. 1440
  • check the height at 0x5361CEB5, e.g. 832
  • check the size at 0x1E505002, e.g. 599040
  • let's save the output as 'current.png'

The command, run from the directory containing both picture.bin and imgdec.py, will be:

python imgdec.py picture.bin 1440 832 599040 -o current.png
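As a rough idea of what such a decoder can do (this is not the attached imgdec.py, just a minimal reimplementation under the byte-layout assumptions above; Pillow handles the PNG output):

```python
import struct

def decode_pixels(data: bytes, count: int):
    """Turn `count` 8-byte records into (r, g, b) tuples. Layout assumed."""
    out = []
    for off in range(0, count * 8, 8):
        value, = struct.unpack_from("<H", data, off)   # bytes 0-1: BGR 565
        b, g, r = (value >> 11) & 0x1F, (value >> 5) & 0x3F, value & 0x1F
        out.append(((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2)))
    return out

def save_png(data: bytes, width: int, height: int, dest: str):
    """Write the decoded block out as a PNG via Pillow."""
    from PIL import Image  # pip install Pillow
    img = Image.new("RGB", (width, height))
    img.putdata(decode_pixels(data, width * height))
    img.save(dest)

# e.g. with the values above:
# with open("picture.bin", "rb") as f:
#     save_png(f.read(), 1440, 832, "current.png")
```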

In the future I'll try to understand better the other bytes of the encoding and maybe build a little editor/injector for custom images. Of course you are free to make any change you want, and if you want to share new ideas let me know in the replies!

Alongside the script I also attach some example output images from my saves.









I still haven't figured out what the other bytes do, but I can confirm that replacing them with 00 padding doesn't create any new problems. So I edited the script a bit, and now you can create a custom .bin for the trainer photo/icon from any image you like. It's still unpolished, and it's hardcoded to picture sizes I deem 'safe' (for some reason the actual images may be smaller than the values stored in the dimension blocks). Of course, the produced .bin must overwrite the corresponding old block via PKHeX, and the other blocks (width, height, size) must be overwritten with the values I put in the comments inside the script. Still needs work, but it's getting fun.
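For anyone reworking it, the reverse direction can be sketched like this (same layout assumptions as before; the 0xFF alpha and the zero padding follow the 'safe' behavior reported above):

```python
import struct

def encode_pixels(pixels):
    """pixels: iterable of (r, g, b) 8-bit tuples -> raw block bytes."""
    out = bytearray()
    for r, g, b in pixels:
        # pack as BGR 565, blue in the top 5 bits (assumed)
        value = ((b >> 3) << 11) | ((g >> 2) << 5) | (r >> 3)
        out += struct.pack("<H", value)  # bytes 0-1: 16-bit color
        out.append(0xFF)                 # byte 2: fully opaque
        out += b"\x00" * 5               # bytes 3-7: zero padding (reported safe)
    return bytes(out)
```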

Of course, you are still free to make any change you wish and rework the script as you please (as foohyfooh did, and may do again if he wants to incorporate the new changes into an eventual injector)

4 hours ago, foohyfooh said:

I attach an IRL photo I took as proof, with the trainer from PBR as the background



Edited by Pako96

Obligatory reminder/warning that injecting images is bannable, as player photos are shared in PvP battles, etc.

It is recommended never to inject images that end up being transmitted from your console, because they are a clear indicator of a hacked console.



While foohyfooh updates his plugin with the injector, I did some additional research on the images and found that if you split each 8-byte sequence into four 2-byte subsequences and reassemble each stream, you get four different layers. Bytes 0-1 of each pixel give the low-res image in light mode, bytes 2-3 give the low-res dark mode, and bytes 4-5 and 6-7 give a sort of segmentation mask of the previous two interpretations. I am running experiments on how to blend them together. If somebody is interested and skilled in image manipulation, don't hesitate to reply or contact me.
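The split described above can be sketched as follows (the meaning of each plane follows my interpretation of the post, not a confirmed spec):

```python
def split_planes(data: bytes):
    """Slice raw block data into four 2-bytes-per-pixel plane streams."""
    planes = [bytearray() for _ in range(4)]
    for off in range(0, len(data), 8):
        record = data[off:off + 8]
        for i in range(4):
            planes[i] += record[2 * i:2 * i + 2]
    # per the post: light image, dark image, and the two mask layers
    return [bytes(p) for p in planes]
```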






Screenshot from game

 Light and Dark Images

Light and Dark Masks

I haven't gotten it working in the plugin, but using GIMP and applying the dark mask as a layer mask (darker equals more transparency), I got the resulting image below, where you can see the blue dot from the machine showing up, just as in the game screenshot of what the profile picture should be. But I still haven't figured out how the light mask is used.
Joined Image

Edited by foohyfooh

Apparently computing the mean of the first two images (for each pixel of the two images: (pxIM1 + pxIM2) // 2) gives good results. I'm still figuring out what the last two images do. Also, given the values in the resolution blocks, I've started to think it could be a custom subsampling algorithm. News will follow.
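For clarity, the pixel-by-pixel mean described above, assuming both images as equal-length lists of (r, g, b) tuples:

```python
def mean_blend(lightIM, darkIM):
    """Average two pixel lists channel by channel with integer division."""
    return [tuple((a + b) // 2 for a, b in zip(p1, p2))
            for p1, p2 in zip(lightIM, darkIM)]
```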





Other source material

I truncated the photo block to see how the game directly interprets the various components.

The first one is the normal photo.

The second one is the photo without the masks (just bytes 0-1-2-3), and it looks pixelated/low-res.

The third one is the photo with only the masks (bytes 4-5-6-7); it could be a XOR between the two masks.







Single blocks display

The first two bytes constitute the basic low-res image.

The other two bytes (the dark image) aren't shown when the first two bytes are black, so it's possible that a logical AND is performed between the first two groups of bytes.

The two masks overlap perfectly in the image from my previous reply; furthermore, it's very likely they need to be read as grayscale, since they are both black and white. They are probably there as alpha-channel balancing to make the original image look smoother.






I can nearly confirm that the light image and the dark image have to be alpha-blended to make the final image. This logic explains why the pixel-by-pixel statistical mean works: it is an alpha blend with alpha fixed at 0.5. Now we need to understand how to apply the composite mask for blending. My first guess was to perform the alpha blend arithmetically pixel by pixel, using the luminance of the corresponding mask pixel as alpha. Another way could be computing a fixed alpha as the number of white pixels divided by the total number of pixels in the mask.
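The fixed-alpha guess can be sketched as follows (hypothetical helper names; the mask is assumed to be a flat list of grayscale values, and the images lists of (r, g, b) tuples):

```python
def fixed_alpha(mask, white=255):
    """One blend weight for the whole image: fraction of white mask pixels."""
    return sum(1 for v in mask if v == white) / len(mask)

def blend(lightIM, darkIM, alpha):
    """Alpha-blend two pixel lists: light * (1 - alpha) + dark * alpha."""
    return [tuple(int(l * (1 - alpha) + d * alpha) for l, d in zip(p1, p2))
            for p1, p2 in zip(lightIM, darkIM)]
```

Note that alpha = 0.5 reduces `blend` to the pixel-by-pixel mean from the earlier post.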

Edited by Pako96
Further Development

Probably last update

I kept experimenting and got some acceptable results, even if nothing is certain, since the true algorithm will never be released to the public.

The original image:



The new experiment:

It was obtained as follows: save the light image and the dark image, save the light mask and the dark mask, and make a third mask as the bitwise pixel-by-pixel OR of the two original masks (the masks have to be read as grayscale images, normalizing the value of each pixel to the 0-1 range). Then for each pixel of the two initial images we do alpha blending, taking as alpha the corresponding pixel value of the OR mask. So for the three channels we'll have the following:

# assuming r, g, b as the three channels and the two images expressed as lists
# of (r, g, b) tuples named lightIM and darkIM, where each tuple is a pixel;
# the mask is a list of values between 0-255 named mask, and the final image
# starts as an empty list named finalIM
from PIL import Image

finalIM = []
for px in range(len(lightIM)):
  alpha = mask[px] / 255  # normalize to 0-1 so the blend weights sum to 1
  newR = int(lightIM[px][0] * (1 - alpha) + darkIM[px][0] * alpha)
  newG = int(lightIM[px][1] * (1 - alpha) + darkIM[px][1] * alpha)
  newB = int(lightIM[px][2] * (1 - alpha) + darkIM[px][2] * alpha)
  finalIM.append((newR, newG, newB))

image = Image.new("RGB", (int(width / 4), int(height / 4)))
image.putdata(finalIM)

Obtaining these:



The old pixel-by-pixel arithmetic average:



As you can see they look pretty similar, with the new experiment being a little smoother, though maybe the overworld lighting isn't completely correct.

Anyway, I'll leave this here for further research by whoever wants to give it a try or implement it in any form. I'll step away from this topic for a while, but I'll still keep an eye on it when I can.



  • 4 months later...

Turns out the images are compressed with DXT1 compression. Injecting images should work great now.

Anyways, I saved you all the trouble with the image stuff and made a Python script and executable, with a tutorial on how to inject/extract images.

Here's the link. I worked hard on this so I hope someone finds this useful: https://github.com/PizzaTimeJoshua/SV-Image-Injector
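For reference, a standard DXT1 (BC1) block is 8 bytes covering a 4x4 pixel group: two little-endian RGB565 endpoint colors followed by sixteen 2-bit palette indices, which may explain why the earlier per-8-byte analysis kept turning up 16-bit 565 colors. A minimal decoding sketch for one block:

```python
import struct

def rgb565(v):
    """Expand a 16-bit RGB565 value (red in the top 5 bits) to 8-bit channels."""
    r, g, b = (v >> 11) & 0x1F, (v >> 5) & 0x3F, v & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def decode_dxt1_block(block: bytes):
    """Decode one 8-byte DXT1 block into 16 (r, g, b) tuples, row-major."""
    c0v, c1v, bits = struct.unpack("<HHI", block)
    c0, c1 = rgb565(c0v), rgb565(c1v)
    if c0v > c1v:  # 4-color mode: two interpolated colors
        pal = [c0, c1,
               tuple((2 * a + b) // 3 for a, b in zip(c0, c1)),
               tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))]
    else:          # 3-color mode: midpoint plus transparent black
        pal = [c0, c1,
               tuple((a + b) // 2 for a, b in zip(c0, c1)),
               (0, 0, 0)]
    return [pal[(bits >> (2 * i)) & 0b11] for i in range(16)]
```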


Whoops, I hardcoded some sizes for the width and height that apparently change (for reasons unknown), e.g. *UInt32 KPictureProfileCurrentWidth was observed to be either 960 or 1440. Changing the values allows a higher-quality image to be rendered onto the profile and icon, but it also means you need to check/change the values every time you want to inject an image. I tested on my old save file, which had 960 as the width, but when I tried a newer save file, it had 1440 as the width, which made the image come out incorrectly. I will remedy this issue (or try to) as soon as I can.

Damn these inconsistent sizes!

