Release 1.6.4b


  • Qwen 3.5 related improvements
  • Removed Mistral Small
  • Bug fixes

Files

Silverpine 1.6.4b.zip 280 MB
4 days ago
Silverpine 1.6.4b Linux.zip 283 MB
4 days ago
Silverpine 1.6.4b Demo.zip 280 MB
4 days ago
Silverpine 1.6.4b Linux Demo.zip 283 MB
4 days ago

Get Silverpine

Buy Now: $8.00 USD or more

Comments


Right off the bat I'm getting an error saying "Failed to load AI model: The process has exited/crashed." 

If I select "Try again with automatic GPU offload" it seems to work, but I don't know what the difference is.

Thanks for speeding up the text!  Quality of life right there.

What's your GPU? The difference is that having the backend automatically determine the correct number of layers to offload to the GPU doesn't always work, which can then result in very slow processing.

1. Is there a plan to add some free models using OpenRouter?

2. Is there a way to change the downloaded model? I downloaded one of the models and now I don't see a way to change it.

You can change it by checking "Show Local AI Model Selection on Startup" in the settings. It's hidden unless a local model is currently loaded.

There are no plans to support the free APIs on OpenRouter, because they're too unreliable.


So what happened with Mistral Small that made you want to remove it? Was it getting too buggy or something? Or did you find a better option? Just curious.

Qwen 3.5 27B is simply superior in every way.

An old bug started occurring where the NPCs start to have dialog in first person. This has happened about 3 times but isn't that bad. I had it talk for me once. Haven't gotten it to do that again but thought I would mention it. This was on Qwen.