SatComm - Narrative hard sci-fi map tagging game


SatComm (working title) is a narrative hard sci-fi map-reading-and-tagging game where the player pores over real-time, time-delayed, and archival lunar satellite imagery and commands a small robotic rover to survey the land for anything worth studying closer, uncovering activities the agency may not be aware of along the way.


I wanted to focus on creating a game that played with persistence (the game continues to progress while you’re not playing) in order to demand less of your time. Actions take real-world time to complete, and each play session will often be no more than 10 minutes of reading messages and setting up the next slate of actions.

Update Frequency

I work full-time as a games programmer and work on this project mostly in the spare time I have on weekends, while being as careful as I can not to let myself get burned out. So updates are probably not going to be frequent! But I will update this dev log as I make notable progress!

I’ll post updates in the replies, and edit this first post with updated info and nav links to keep things organized!


Update #1

Right now, the systems I have working include:

  • Google Maps-style Camera: This is a slightly modified version of the EditorCamera from the Cameras module.

  • Persistent Scheduled Events: Setting up scheduled events that save/load, so objects in the world can slowly move to other locations and the player can receive email messages. This way the player can set up several things to happen, close the game, go grab a coffee or whatever people do these days, come back, and see the results.

  • Objects Have Stats: If the player’s rover scans certain objects in the world, they can see that data in an info window. This is really just going to be for narrative info, like who made it, what its COSPAR ID is, how all its subsystems are working, etc.
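As a rough sketch of how that persistent scheduling can work (this is illustrative Python, not the game's actual code; names like `Scheduler` are made up): each event stores an absolute real-world completion timestamp, so any time that passes while the game is closed is accounted for automatically on the next load.

```python
import json
import time

class Scheduler:
    """Minimal sketch of persistent scheduled events (illustrative names)."""

    def __init__(self):
        self.pending = []  # list of (finish_timestamp, payload)

    def schedule(self, delay_seconds, payload):
        # Store an absolute wall-clock finish time, not a countdown,
        # so elapsed offline time "just works".
        self.pending.append((time.time() + delay_seconds, payload))

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.pending, f)

    def load(self, path):
        with open(path) as f:
            self.pending = [tuple(e) for e in json.load(f)]

    def pop_completed(self, now=None):
        """Return payloads whose finish time has passed, removing them."""
        now = time.time() if now is None else now
        done = [p for t, p in self.pending if t <= now]
        self.pending = [(t, p) for t, p in self.pending if t > now]
        return done
```

On boot, the game would `load()`, then `pop_completed()` to resolve everything that finished while it was closed: rovers arrive, emails land in the inbox.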

I’m starting to play with updating the art to make everything really look like blurry LRO photography, and even with my beginner knowledge of Blender, I’m hitting the right look really quickly. I’m also homing in on object and world scaling so I can figure out just how enormous I want to make the landscape you can pan around.

Going to do a little more art test stuff, get it hooked into the game, and try moving towards hiding stuff in the world for the player to stumble upon, to see what feels right before I let myself get too nitty-gritty.


This looks really interesting! Looking forward to seeing your progress

1 Like

It’s been fun watching this progress! I played the very early scanning stuff recently and looking forward to a longer form mission of sorts that forces me to wait/come back.

looking good too!

Great idea, and cool aesthetic so far :waning_crescent_moon:

1 Like

Update #2

Last update, I had just started to nail the look of the lunar landscape renders, which the player will be panning around for much of the play experience. That was a pretty significant milestone and was fairly easy to achieve. I’m still not 100% there, but it was good enough that I figured I should move to the next step: determining how I’d establish a consistent scale for objects and how big I should make the overall map.

The end result I’m going for is to look like blurry aerial photography taken by the Lunar Reconnaissance Orbiter, both to lend an air of authenticity to the depiction and to save me time on how much detail I need to model and texture. It also leaves the true look of things up to the imagination, which I like a lot. So the challenge is to strike a balance where small objects, like the player’s rover, are large enough that you can tell what they are, but not so large that you get lots of fine detail.

Determining the size of objects

I started out by doing lots of test renders in Blender with my rover model and a 1-meter cube on a test landscape and adjusting the size of those objects and the size of the final render resolution.

After spending a ton of time iterating and tweaking, waiting, rerendering, then tweaking more, I came to an answer:

1 meter = 9 pixels

On its face, this sounds incredibly low-detail – and it is! – but it hits the look and, realistically, it’s also incredibly high-resolution for an orbiting satellite like this! The Lunar Reconnaissance Orbiter, for example, can get photos with a resolution of up to 0.5 meters per pixel.

Determining the size of the world

So this naturally leads into “how big is the map we’re exploring?” And given the unit of measurement above, some things are already clear:

> 10 meters = 90 pixels
> 1 kilometer = 9000 pixels

Because I want to be reasonably authentic, my first thought was using a real-world lunar crater or walled basin on the south polar region. But, uhh…

  • Shackleton Crater: 21km diameter = 189,000px = 92x92 2048 tiles
  • Shoemaker Crater: 51km diameter = 459,000px = 224x224 2048 tiles
  • Schrödinger Crater: 312km diameter = 2,808,000px = 1371x1371 2048 tiles

Absolutely no way that kind of scale will work here, both because of just how incredibly vast those spaces would be and because I don’t know if I want to render and iterate on that size of canvas. So I decided to figure out what world scale does feel right, regardless of real-world parallels.
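For reference, those tile counts are just the crater diameter converted to pixels at 9px per meter, then divided into 2048px tiles. A quick sanity check (illustrative Python, rounding to match the figures above):

```python
PX_PER_METER = 9
TILE_SIZE = 2048

def tiles_for_diameter_km(km):
    """Return (total pixels, tiles per side) for a crater diameter in km."""
    px = km * 1000 * PX_PER_METER
    return px, round(px / TILE_SIZE)

# Shackleton: 21 km -> 189,000 px -> ~92 tiles per side
# Schrödinger: 312 km -> 2,808,000 px -> ~1371 tiles per side
```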

I started by making a 4x4 grid of 2048x2048 rendered tiles of test terrain, placed it in the game, and panned around to get a feel for it. This felt way too small, so I doubled it to an 8x8 grid of 2048x2048 tiles. I quickly doubled this again to 16x16 tiles, and this is where it felt just right: a huge terrain where landmarks can have gargantuan scale, but not so large that I’d end up with lots of empty space or pressure to make unreasonable amounts of content to fill it.

And this turns out to be about 4km x 4km: a 16x16 tile grid of 2048x2048 renders, or 32,768 pixels per side (roughly 3.6km at 9 pixels per meter).

Prototyping landscape features

My next challenge was prototyping the best method to fill this space with landmarks, features, and objects for people to find. This is naturally going to be an ongoing process throughout development, but it’s worth coming up with a method sooner rather than later.

If I did all my prototyping and iterating in Blender, re-rendering tiles as needed to put new things into the game, that iteration time would be incredibly slow. I want to basically be able to quickly sketch out stuff and then see it in-game immediately to get a real sense of things.

So I compiled all of my rendered tiles into a massive 32,768x32,768 Photoshop file. My computer hesitantly decided to allow this. Now I could draw random shapes for mountains, craters, lava tubes, etc. And with a simple script, I could split all of these back out into 2048x2048 tiles into my game’s project folder, relaunch the game, and see the results.
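The splitting step itself is conceptually simple. Here’s a rough stdlib-only sketch of the idea, using a flat grayscale byte buffer as a stand-in for the exported image (my actual script goes through an image tool, and the function name here is made up):

```python
def split_tiles(pixels, width, height, tile=2048):
    """Split a row-major grayscale buffer into (col, row, tile_bytes) tuples.

    Assumes width and height are exact multiples of the tile size,
    which holds for a 32,768 x 32,768 canvas and 2048 px tiles.
    """
    tiles = []
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            # Gather the tile's rows out of the big flat buffer.
            rows = []
            for y in range(ty, ty + tile):
                start = y * width + tx
                rows.append(pixels[start:start + tile])
            tiles.append((tx // tile, ty // tile, b"".join(rows)))
    return tiles
```

Each `(col, row)` pair maps straight to a tile filename in the game’s project folder, so a relaunch picks up the new terrain immediately.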

This works great! Iteration time this way is fast and lets me try stuff before I commit to actually modeling and sculpting it out.

Current challenges

Okay, so I’ve figured out object and world scales, and it’s giving me a real sense of the size and scope of the game and that feels so, so great! But now I’m left with a huge problem:

My game’s world consists of 256 2048x2048 images all loaded at the same time.

My main computer is fine with this, but if I try to load it on my laptop, which has a pretty average amount of memory, it crashes on boot. Of course it does! This is a full dump truck of image data all at once! There’s currently zero asset/texture streaming going on, but at least I have my assets split up, so I’m all set up to tackle that.
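When I do get to streaming, the core bookkeeping can be sketched like this (illustrative only, not engine code): track which tiles sit within some radius of the camera’s current tile, and diff that set against what’s loaded each time the camera moves.

```python
class TileStreamer:
    """Sketch of tile streaming: keep only tiles near the camera loaded."""

    def __init__(self, grid=16, keep_radius=1):
        self.grid = grid              # tiles per side (16x16 world)
        self.keep_radius = keep_radius
        self.loaded = set()

    def wanted(self, cam_col, cam_row):
        """Tiles within keep_radius of the camera, clamped to the grid."""
        r = self.keep_radius
        return {
            (c, w)
            for c in range(max(0, cam_col - r), min(self.grid, cam_col + r + 1))
            for w in range(max(0, cam_row - r), min(self.grid, cam_row + r + 1))
        }

    def update(self, cam_col, cam_row):
        """Diff wanted vs. loaded; a real engine would swap textures here."""
        want = self.wanted(cam_col, cam_row)
        to_load = want - self.loaded
        to_unload = self.loaded - want
        self.loaded = want
        return to_load, to_unload
```

With something like this, the laptop only ever holds a handful of 2048x2048 textures instead of all 256 at once.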

But for now, this answered a lot of questions I had with world design, so I’m going to move on to tackling more of the remaining unknowns.

1 Like