Given small computational resources how navigation was implemented ( Not Samples of old guidance software)
Update: What I really wanted to know was how spacecraft navigation (not guidance) computers worked, given small computational resources. I have asked that in another question and edited this question to limit answers to examples of old guidance software source code. Those interested in samples of old guidance software should refer to "Samples of old guidance software" instead. I'm leaving the original (incorrect) question below as it was, so the responses don't look irrelevant.
In an article I came across something like: "X used a hardware program for the Venus mission with 65 Kb of memory" (not sure if this number is correct).
I am a software developer, and with all the resources available today I cannot fathom where one could even start such an endeavour.
Is there an archive (museum) of old/antique software that was written (hard-wired or soft) for interplanetary missions? If there is something at a higher level than assembly, i.e. the equivalent in today's Java/Pascal/C# etc. with no consideration for memory and disk usage, that would be even better.
From what little I understood, it seems a task equivalent to constructing the Pyramids with primitive tools. Are there any simulations or tools to give today's simpleton programmer a glimpse and an appreciation of what those giants did?
Tags: navigation, software
Not an answer, but links in answers to the following questions might be helpful: How did the Apollo computers evaluate transcendental functions like sine, arctangent, log?, as well as How did the Apollo guidance computer handle parity bit errors?, and also How did the Apollo guidance computer handle the Earth-Moon system's rotation around the Sun?
– uhoh, Oct 14 at 10:27
The relationship is that both space probes and ICBMs need to leave Earth first. "Samples of old guidance software" for a Saturn V you will not find.
– Mazura, Oct 14 at 23:13
@Arjang, orbital-transfer software might not be suitable for ICBMs, but I'm pretty sure the Apollo re-entry guidance software is.
– Mark, Oct 15 at 6:44
Even something simple like Notepad has a whole host of OS compatibility layers, the GUI, and other libraries it's built on. It's written in a fairly high-level language, targeted at the fairly complex x86 platform, and has a surprising amount of functionality. When you get down to assembly and optimising for size, you can do incredible things; e.g. theverge.com/2012/5/14/3014698/assembly-4k-demoscene-fractals
– Baldrickk, Oct 15 at 10:31
As a programmer I strongly encourage you to try microcontroller programming. An Arduino is a great introduction; the basic model has 1k of RAM. You'd be amazed at how much you can achieve with 1k of RAM: people have written everything from quadcopter (drone) controllers, radio-control receivers, and walking-robot controllers to airplane autopilot guidance/navigation software, all in 1k of RAM. I started microcontroller programming with the PIC16F84, which has 68 bytes (yes, bytes, not kilobytes) of RAM, and I implemented a lot of projects with it.
– slebetman, Oct 16 at 4:26
asked Oct 14 at 10:13 by Arjang (edited Oct 15 at 21:03)
3 Answers
Many early probes, up until close to the Apollo era, carried no true computer at all. All computing was done on Earth, and the onboard electronics were known as a sequencer; Pioneer 10's had 222 possible commands, 5 of which could be readied at a time. Early Venus probes sent data by mechanically switching different sensors in turn to modulate a CW transmitter, with everything sorted apart on Earth.
This also applied to much of the Apollo launch process, where the hardware in the launch platform did not run true software but a fixed sequence of "wait, activate this, wait, measure that, and if out of bounds hold, else continue".
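The sequencer style described above can be sketched in a few lines. This is a hypothetical illustration, not real flight code: the step table, device names, and sensor bounds are all invented for the example.

```python
# Toy sequencer: a fixed, pre-planned table of timed steps, with the
# only "branch" being a hold when a measurement is out of bounds.

def run_sequence(steps, read_sensor, activate):
    """Walk a fixed step list in order; hold on an out-of-bounds reading."""
    log = []
    for step in steps:
        kind = step[0]
        if kind == "wait":
            log.append(("waited", step[1]))        # in flight: a real timer
        elif kind == "activate":
            activate(step[1])
            log.append(("activated", step[1]))
        elif kind == "check":
            name, lo, hi = step[1], step[2], step[3]
            value = read_sensor(name)
            if not (lo <= value <= hi):
                log.append(("hold", name, value))  # out of bounds: hold here
                break
            log.append(("ok", name, value))
    return log

# Illustrative launch-style sequence and a nominal sensor reading.
steps = [("wait", 5), ("activate", "valve_a"),
         ("check", "tank_pressure", 180, 220), ("activate", "igniter")]
readings = {"tank_pressure": 200}
log = run_sequence(steps, readings.get, lambda device: None)
```

The point of the sketch is that nothing here is general-purpose: the "program" is just data, a table of steps, which is why such controllers needed so little computing machinery.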
Along with the AGC code linked by Ludo, you can look at the abort controller as a smaller-scale example of how things were done (a fixed loop of known steps and timing).
Even today it is very rare to send a spacecraft code that does not boil down to a sequence of very specific instructions to be run in order. Curiosity has some autonomous navigation and photo-taking capability, but in general, branching code exists to trigger a fallback/fail-safe ("oops, stop, solve the antenna-pointing problem, and call home for instructions") rather than AI or learning code.
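That "branch only into a fail-safe" pattern can be sketched as follows. The command names and the safing message are invented for illustration; real fault-protection software is far more elaborate.

```python
# Hedged sketch: commands run strictly in order, and any anomaly drops
# the craft into a single safing routine instead of clever recovery.

def execute(commands, healthy):
    """Run a fixed command list; on any anomaly, safe the craft and stop."""
    done = []
    for cmd in commands:
        if not healthy():                 # anomaly check before each step
            done.append("SAFE_MODE: stop, point antenna, call home")
            return done
        done.append(cmd)
    return done

plan = ["slew to target", "take photo", "downlink data"]

# Nominal run: every step executes in order.
nominal = execute(plan, lambda: True)

# Faulted run: healthy once, then an anomaly before the second step.
failures = iter([True, False])
faulted = execute(plan, lambda: next(failures))
```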
In general terms, code was made to fit the same way people program microcontrollers today:
Not having any form of user interface in code (the Apollo DSKY was largely hardware).
Using approximation or integer math instead of floating point (lots of things are possible where pi = 3), or precomputing constants on Earth and uploading them when required (say, gravity or engine performance).
Custom-designing supporting hardware, like star trackers, to be preloaded with constants from Earth and to output data pre-formatted and bounds-checked for the next processing step.
In fact, bounds-check only once, where the data is sourced, and ensure no following step can overflow it.
Designing algorithms to work in registers rather than memory locations, which makes for horrible source code (since you do not have variables) but avoids a lot of moving values in and out of memory.
Avoiding general problems in favour of the specific: for spacecraft this was all about navigation, reporting sensor/instrument states, and pointing. All of these could have carefully crafted code that worked well over a specific range of inputs.
Trusting your data (in the security sense), though nature can still get you.
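The "integer math over floating point" item above is usually done with fixed-point arithmetic: represent fractional values as scaled integers so that only integer multiplies and shifts are needed. This is a minimal sketch; the scale factor is chosen arbitrarily for the example, not taken from any flight computer.

```python
# Minimal fixed-point sketch: fractions stored as scaled integers,
# so no floating-point hardware is needed.

SCALE_BITS = 14
SCALE = 1 << SCALE_BITS              # 1.0 is represented as 16384

def to_fixed(x):
    """Convert a real value to its fixed-point integer representation."""
    return int(round(x * SCALE))

def fixed_mul(a, b):
    """Multiply two fixed-point values, renormalising the scale."""
    return (a * b) >> SCALE_BITS

half = to_fixed(0.5)                 # 8192
quarter = fixed_mul(half, half)      # 0.5 * 0.5 == 0.25, i.e. 4096
```

Trig functions in this style are typically small lookup tables plus interpolation, all operating on the same scaled integers.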
"Curiosity has some autonomous navigation and photo-taking capability, but in general branching code is there to trigger fallback/fail-safe ... rather than AI or learning code." Well, Mars Pathfinder (from the 90s) had a real-time operating system (VxWorks) with sufficiently complex task scheduling that it could run into a priority-inversion problem. Complex, complex.
– David Tonhofer, Oct 16 at 6:06
(Originally answered to "Samples of old guidance software".)
The first thing that comes to mind is the GitHub repository of the Apollo 11 Guidance Computer (AGC) code. The repository has both Command Module and Lunar Module software, but note that it was transcribed from hardcopies, so it might not be fully complete (yet). You can find a simulator of the AGC on the Virtual AGC website (there's a ton of other references there as well).
I am a software developer and with all the resources available today I cannot fathom where one could even start such an endeavour.
There are plenty of computer-based systems to this day that have to live with such limitations; on many embedded systems, 2^16 (65536) bytes of memory remains a luxury. After all, on machines that use 16-bit memory addresses (plenty of which still exist and are still being manufactured today), there's no point in having more than 65536 bytes of memory. And just as there's no problem with a computer with 64-bit addresses having less than 18+ exabytes of memory, there's no problem with a computer that uses 16-bit addresses having less than 2^16 bytes of memory.
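The 16-bit address limit is easy to demonstrate in miniature: with 16-bit pointers, address arithmetic wraps modulo 2^16, so bytes beyond 65536 are simply unreachable. The helper below is a toy model of that hardware behaviour, not any particular machine's.

```python
# Toy model of 16-bit address arithmetic: addresses wrap modulo 2**16,
# which is why more than 64K of directly addressed memory is pointless.

ADDR_BITS = 16
ADDR_MASK = (1 << ADDR_BITS) - 1     # 0xFFFF, the last addressable byte

def next_addr(addr, step=1):
    """Advance an address the way 16-bit hardware would: with wraparound."""
    return (addr + step) & ADDR_MASK

top = ADDR_MASK                      # 65535
wrapped = next_addr(top)             # one past the top wraps to 0
```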
There are many ways to start with such an endeavour. The number one rule is to eschew the use of an operating system. Many (most?) embedded systems are bare machines: there's no OS, and there's only one program running, ever. Your microwave oven has a computer operating as an embedded system, and it has no operating system. If your car was manufactured in the last 25+ years, it has lots of embedded systems running in it; if it is anywhere close to modern, it has several dozen microcontrollers that collectively run several million lines of code.
Many of the microcontrollers in a modern car are not subject to the 64K (2^16, or 65536-byte) address limit. Back in the day, that was a very common limit, and it inherently limited the size of memory. But it did not limit storage: the problem of storage size exceeding address limitations was solved in the 1950s and 1960s. A common solution was to use memory overlays. This technique, one I'm glad to have (mostly) forgotten about, remains common to this day in embedded-systems programming.
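The overlay idea can be sketched as follows: only one group of routines fits in scarce memory at a time, and the others live in bulk storage until they are swapped in. This is a deliberately simplified toy; the overlay names and routines are invented, and real overlay systems were managed by linkers and loaders, not application code like this.

```python
# Toy model of memory overlays: a single resident "slot" of routines in
# scarce RAM, with other overlays living in bulk storage until needed.

OVERLAYS = {                          # "on disk": routines grouped by phase
    "launch": {"ignite": lambda s: s + ["ignited"]},
    "cruise": {"point_antenna": lambda s: s + ["antenna pointed"]},
}

resident = {}                         # the single overlay slot in "RAM"
resident_name = None

def call(overlay, routine, state):
    """Invoke a routine, swapping its overlay into the resident slot first."""
    global resident, resident_name
    if resident_name != overlay:      # needed overlay not resident: swap it in
        resident = OVERLAYS[overlay]
        resident_name = overlay
    return resident[routine](state)

state = call("launch", "ignite", [])          # loads the "launch" overlay
state = call("cruise", "point_antenna", state)  # evicts it for "cruise"
```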
Another widely used technique was, and is, to have the embedded machine follow a Harvard architecture as opposed to a von Neumann architecture. There is no distinction between code and data in a von Neumann machine; code and data are very different things in a Harvard-architecture machine, possibly with different word sizes. Your laptop or desktop machine is most likely a von Neumann machine, at least on the surface; deep under the hood it looks more like a Harvard machine, with separate caches for code and data.
Yes, I should have said "application developer on PCs" with (almost) unlimited amounts of everything; I forgot about embedded systems and all their challenges with extremely limited memory and storage. I shouldn't throw around "I am a software developer" so lightly.
– Arjang, Oct 15 at 21:08
When I was a physics student the HP-25 came out; it could hold 49 program steps. I programmed it to act as a lunar lander, where the user entered burn duration and thrust. All the physical constants, lander mass, fuel mass, initial velocity, and altitude were correct. I ignored attitude/vector control, but the point is that it was 49 steps and correct. And it was damn hard to land. Neil Armstrong was one hell of a pilot!
– andy256, Oct 16 at 9:44
3 Answers
3
active
oldest
votes
3 Answers
3
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
In many of the early probes, up until close to Apollo there were not true computers on space probes. All computing was done on earth and the onboard electronics was known as a sequencer, for Pioneer 10 it had 222 possible commands 5 of which could be readied. Early Venus probes sent data by mechanically switching different sensors to modulate a CW transmitter in turn and sorting it all apart on earth.
This also applied to much of the Apollo launch process, where the hardware in the launch platform did not run true software but a sequence (from here) of 'wait, activate this, wait, measure that and if out of bounds hold else continue'.
Along with the AGC code link by Ludo you can look at the abort controller as a smaller scale example of how things were done (fixed loop of known steps and timing).
Even today it is very rare to send code to a space craft that does not boil down to a sequence of very specific instructions to be run in order. Curiosity has some autonomous navigation and photo taking capability but generally branching code is there to trigger fallback/fail safe 'oops stop, solve antenna pointing problem and call home for instructions' rather than AI or learning code.
In general terms code was made to fit the same way people program for microcontrollers today:
Not having any form of user interface in code (Apollo DSKY was largely hardware)
Using approximation or integer math over floating point (lots of things are possible where pi = 3) or precompute constants on earth and upload when required (say gravity or engine performance)
Custom designing supporting hardware like star trackers to be preloaded with constants from earth and to output pre formatted and bound checked for the next processing step.
In fact, bounds check only once, where data is sourced and ensure no following step can overflow it.
Design algorithms to work in register(s) rather than memory locations (which makes horrible source since you do not have variables) but means you can avoid lots of moving values in and out of memory.
Avoid general problems for the specific, for space craft this was all about navigation, reporting sensor/instrument states and pointing. All of these could have carefully crafted code that worked well over a specific range of inputs (Though see).
Trust your data (in security sense) (though nature can still get you)
$endgroup$
2
$begingroup$
"Curiosity has some autonomous navigation and photo taking capability but generally branching code is there to trigger fallback/fail safe 'oops stop, solve antenna pointing problem and call home for instructions' rather than AI or learning code." Well, Mars Pathfinder (from the 90s) had a real-time operating system (VxWorks) with sufficent complexity of task schedule that it could run into a priority inversion problem. Complex, Complex.
$endgroup$
– David Tonhofer
Oct 16 at 6:06
add a comment
|
$begingroup$
In many of the early probes, up until close to Apollo there were not true computers on space probes. All computing was done on earth and the onboard electronics was known as a sequencer, for Pioneer 10 it had 222 possible commands 5 of which could be readied. Early Venus probes sent data by mechanically switching different sensors to modulate a CW transmitter in turn and sorting it all apart on earth.
This also applied to much of the Apollo launch process, where the hardware in the launch platform did not run true software but a sequence (from here) of 'wait, activate this, wait, measure that and if out of bounds hold else continue'.
Along with the AGC code link by Ludo you can look at the abort controller as a smaller scale example of how things were done (fixed loop of known steps and timing).
In many of the early probes, up until close to Apollo, there were no true computers on board. All computing was done on Earth, and the onboard electronics were known as a sequencer; Pioneer 10's sequencer had 222 possible commands, 5 of which could be readied. Early Venus probes sent data by mechanically switching different sensors in turn to modulate a CW transmitter, with everything sorted out on Earth.
This also applied to much of the Apollo launch process, where the hardware in the launch platform did not run true software but a sequence (from here) of 'wait, activate this, wait, measure that and if out of bounds hold, else continue'.
Along with the AGC code linked by Ludo, you can look at the abort controller as a smaller-scale example of how things were done: a fixed loop of known steps and timing.
Even today it is very rare to send code to a spacecraft that does not boil down to a sequence of very specific instructions to be run in order. Curiosity has some autonomous navigation and photo-taking capability, but generally branching code is there to trigger a fallback/fail-safe ('oops, stop, solve the antenna-pointing problem and call home for instructions') rather than AI or learning code.
In general terms, code was made to fit the same way people program for microcontrollers today:
- No user interface in code (the Apollo DSKY was largely hardware).
- Approximation or integer math instead of floating point (lots of things are possible where pi = 3), or constants precomputed on Earth and uploaded when required (say, gravity or engine performance).
- Custom-designed supporting hardware, such as star trackers, preloaded with constants from Earth and outputting data pre-formatted and bounds-checked for the next processing step. In fact, bounds-check only once, where the data is sourced, and ensure no following step can overflow it.
- Algorithms designed to work in registers rather than memory locations, which makes for horrible source code (you have no variables) but avoids lots of moving values in and out of memory.
- Specific solutions rather than general ones: for spacecraft this was all about navigation, reporting sensor/instrument states, and pointing, each of which could have carefully crafted code that worked well over a specific range of inputs (though see).
- Trusting your data (in the security sense), though nature can still get you.
answered Oct 14 at 11:54
– GremlinWranger
"Curiosity has some autonomous navigation and photo taking capability but generally branching code is there to trigger fallback/fail safe 'oops stop, solve antenna pointing problem and call home for instructions' rather than AI or learning code." Well, Mars Pathfinder (from the 90s) had a real-time operating system (VxWorks) with a sufficiently complex task schedule that it could run into a priority inversion problem. Complex, complex.
– David Tonhofer, Oct 16 at 6:06
(originally answered to "Samples of old guidance software")
The first that comes to mind is the GitHub repository of the Apollo 11 Guidance Computer (AGC). The repository has both Command Module and Lunar Module software, but note that it is transcribed from hardcopies, so it might not be fully complete (yet). You can also find a simulator of the AGC on the Virtual AGC website (there are a ton of other references there as well).
edited Oct 18 at 9:48
answered Oct 14 at 10:53
– Ludo
"I am a software developer and with all the resources available today I cannot fathom where one could even start such an endeavour."
There are plenty of computer-based systems to this day that have to live with such limitations. There are plenty of embedded systems where 2^16 (65,536) bytes of memory remains a luxury. After all, on machines that use 16-bit memory addresses (plenty of which still exist and are still being manufactured today), there's no point in having more than 65,536 bytes of memory. And just as there's no problem with a computer with 64-bit addresses having less than 18+ exabytes of memory, there's no problem with a computer that uses 16-bit addresses having less than 2^16 bytes of memory.
There are many ways to start with such an endeavor. The number one rule is to eschew the use of an operating system. Many (most?) embedded systems are bare machines: there's no OS, and there's only one program running, ever. Your microwave oven has a computer operating as an embedded system, and it has no operating system. If your car was manufactured in the last 25+ years, it has lots of embedded systems running in it. If your car is anywhere close to modern, it has several dozen microcontrollers that collectively run several million lines of code.
Many of the microcontrollers in a modern car are not subject to the 64K (2^16, or 65,536) address limit. Back in the day, that was a very common limit, and it inherently limited the size of memory. But it did not limit storage. The problem of having disk size exceed address limitations was solved in the 1950s and 1960s. A common solution was to use memory overlays. This technique, one I'm glad to have (mostly) forgotten about, remains common to this day in embedded systems programming.
Another widely used technique, then and now, is to have the embedded machine follow a Harvard architecture as opposed to a von Neumann architecture. There is no distinction between code and data in a von Neumann machine. Code and data are very different things in a Harvard architecture machine, possibly with different word sizes. Your laptop or desktop is most likely a von Neumann machine, at least on the surface; deep under the hood it looks more like a Harvard machine, with separate caches for code and data.
Yes, I should have said application developer on PCs with (almost) unlimited amounts of everything; I forgot about embedded systems and all their challenges with extremely limited memory and storage. I shouldn't throw around "I am a software developer" so lightly.
– Arjang, Oct 15 at 21:08
When I was a physics student the HP-25 came out; it could hold 49 program steps. I programmed it to act as a lunar lander, where the user entered burn duration and thrust. All the physical constants (lander mass, fuel mass, initial velocity, and altitude) were correct. I ignored attitude/vector control, but the point is that it was 49 steps and correct. And it was damn hard to land. Neil Armstrong was one hell of a pilot!
– andy256, Oct 16 at 9:44
answered Oct 15 at 5:55
– David Hammen
Thanks for contributing an answer to Space Exploration Stack Exchange!
Not an answer, but links in answers to the following questions might be helpful: "How did the Apollo computers evaluate transcendental functions like sine, arctangent, log?", "How did the Apollo guidance computer handle parity bit errors?", and "How did the Apollo guidance computer handle the Earth-Moon system's rotation around the Sun?"
– uhoh, Oct 14 at 10:27
The relationship is that both 'space probes' and ICBMs need to leave Earth first. "Samples of old guidance software" for a Saturn V... you will not find.
– Mazura, Oct 14 at 23:13
@Arjang, orbital-transfer software might not be suitable for ICBMs, but I'm pretty sure the Apollo re-entry guidance software is.
– Mark, Oct 15 at 6:44
Even something simple like Notepad has a whole host of OS compatibility layers, the GUI, and other libraries it's built on... It's written in a fairly high-level language, targeted to the fairly complex x86 platform, and has a surprising amount of functionality. When you get down to assembly and optimizing for size, you can do incredible things; see e.g. theverge.com/2012/5/14/3014698/assembly-4k-demoscene-fractals
– Baldrickk, Oct 15 at 10:31
As a programmer I strongly encourage you to try microcontroller programming. An Arduino is a great introduction: the basic model has 1K of RAM, and you'd be amazed at how much you can achieve with that. People have written everything from quadcopter (drone) controllers, radio-control receivers, and walking-robot controllers to airplane autopilot guidance/navigation software, all in 1K of RAM. I started microcontroller programming with the PIC16F84, which has 68 bytes (yes, bytes, not kilobytes) of RAM, and I implemented a lot of projects with it.
– slebetman, Oct 16 at 4:26