OrbitalHub

The place where space exploration, science, and engineering meet


Archive for the “The Best Of” category

 

 

I recently watched a very interesting DEF CON 26 talk given by three investigative journalists, who present their findings about fraudulent pseudo-academic conferences and journals. These fake science factories cash in on millions of dollars every year while lending an air of scientific credibility to work that never receives real review. We should not underestimate the damage these pseudo-academic conferences can do to society.

Predatory open-access publishing is an open-access academic publishing business model that charges fees to authors without providing the services associated with legitimate journals. The model is exploitative: academics are tricked into publishing without benefiting from proper editorial and publishing services.

Similarly, predatory conferences/meetings, despite being set up to appear as legitimate scientific conferences, do not provide proper editorial control. These conferences also claim the involvement of prominent academics who are not, in fact, involved.

The characteristics associated with predatory open-access publishing include:

  • Accepting articles quickly with little or no peer review or quality control, including hoax and nonsensical papers.
  • Notifying academics of article fees only after papers are accepted.
  • Aggressively campaigning for academics to submit articles or serve on editorial boards.
  • Listing academics as members of editorial boards without their permission, and not allowing academics to resign from editorial boards.
  • Appointing fake academics to editorial boards.
  • Mimicking the name or web site style of more established journals.
  • Making misleading claims about the publishing operation, such as a false location.
  • Using ISSNs improperly.
  • Citing fake or non-existent impact factors.

Characteristics of predatory conferences/meetings include:

  • Rapid acceptance of submissions with poor quality control and little or no true peer review.
  • Acceptance of submissions consisting of nonsense and/or hoaxed content.
  • Notification of high attendance fees and charges only after acceptance.
  • Claiming involvement of academics in conference organizing committees without their agreement, and not allowing them to resign.
  • Mimicry of the names or website styles of more established conferences, including holding a similarly named conference in the same city.
  • Promoting meetings with unrelated images lifted from the Internet.

You might ask, why mention this on a space blog? Well, this affects the scientific community overall, and there are quite a few aerospace pseudo-academic conferences out there that employ these practices. Heads up!

The above-mentioned DEF CON talk is available on YouTube and I encourage you to take the time to watch it: DEF CON 26 – Svea, Suggy, Till – Inside the Fake Science Factory.

References and other useful links:

Predatory open-access publishing

Predatory conference

Beall’s List

 


 

 

Abstract

The blogpost explores relativistic effects on proof-of-work based cryptocurrency protocols. Cryptocurrencies are here to stay, and it is quite plausible that future human colonists spread across the solar system and beyond will use a decentralized cryptocurrency as opposed to a fiat currency issued by a central authority. Low transaction fees, ubiquitous access, independence from exchange rates and interest rates, and freedom from control by financial institutions serving foreign interests — these are some of the advantages cryptocurrencies will enjoy in the thriving exo-economy.

Motivation

At present, on a cryptocurrency network, the information exchanged by the nodes reaches each node almost instantaneously. The speed at which TCP/IP packets travel and the fact that the Internet spans only the Earth and low Earth orbit (LEO) make this possible. However, once the region the network is spread across grows beyond certain boundaries, the network size will start to work against it: more communication failures due to network delays, more frequent and longer-lived blockchain forks as part of the proof-of-work protocol, and segregation of the network into local sub-networks (let us call them topological forks), just to name a few effects.

Nullius in verba… as they say. Let’s go deeper into details and figure out a feasible solution for a truly interplanetary cryptocurrency.

Cryptocurrency 101

You can think of a cryptocurrency as a digital money ecosystem. Plain and simple. A collection of technologies makes up this ecosystem, all of them the result of years of research in the cryptography and distributed systems fields: a decentralized network of computers; a public transaction ledger, also known as a blockchain; a protocol consisting of a set of rules for transaction validation; and a decentralized consensus mechanism. [ANTO]

A decentralized network of computers ensures the resilience of the network. We can think of both computing power and data storage capabilities. As the computing power is distributed across the network, any disruption can be successfully handled by the network. Transaction data resides on all nodes of the network. This implies that even physical damage done to network nodes will not take out the network.

The blockchain is a public distributed ledger which stores all transactions handled by the network. As the name suggests, the blockchain is a list of data blocks. Each of these blocks contains a set of transactions, as many as can fit in the block, given the maximum size of the block (a characteristic of the network). Transactions contain sender and receiver info, and the amount/asset that is changing ownership, and they are broadcasted and added to blocks by network nodes. Blocks are linked to the previous block in the chain by a cryptographic hash (the hash of the previous block becomes part of the current block). This backward link leads all the way to the first block in the chain, the Genesis block. Each cryptocurrency blockchain has one. The cryptographic hashes have an important role in protecting the blockchain from tampering attempts.
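To make the backward link concrete, here is a minimal Python sketch (the field names and toy transactions are illustrative only, not part of any real protocol): each block stores the hash of its predecessor, so altering an earlier block invalidates every link that follows it.

  import hashlib
  import json

  def block_hash(block):
      # Serialize the block deterministically and hash it with SHA-256.
      return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

  # A toy chain: each block records the hash of the previous block.
  genesis = {"prev_hash": "0" * 64, "transactions": ["coinbase -> alice: 50"]}
  block_1 = {"prev_hash": block_hash(genesis), "transactions": ["alice -> bob: 10"]}

  # Any change to the genesis block breaks the link stored in block_1.
  assert block_1["prev_hash"] == block_hash(genesis)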

The transaction validation rules enforced by the nodes in the network ensure that the content of each block in the blockchain is valid. By far the most frequent form of fraud is double spending. The validation process makes sure that the inputs of each transaction exist and that they have not already been spent. Transactions marked as invalid are rejected by the network and do not make it onto the blockchain.
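A rough sketch of that check, assuming the set of unspent outputs is already known (real clients track it as the UTXO set; the names here are mine): a transaction is rejected if any of its inputs is missing from that set.

  def validate_transaction(tx: dict, unspent_outputs: set) -> bool:
      # Every input must reference an existing output that has not been spent yet.
      return all(txin in unspent_outputs for txin in tx["inputs"])

  utxo = {"out-1", "out-2"}
  print(validate_transaction({"inputs": ["out-1"]}, utxo))           # True
  print(validate_transaction({"inputs": ["out-1", "out-3"]}, utxo))  # False: out-3 is missing or already spent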

The consensus mechanism is designed so that all the nodes in the network can agree on the set of transactions to be included in the current block. It shifts the authority and credibility required of a central clearing house onto a network of nodes. It is important to mention here that, inherently, the nodes do not trust each other, and they do not have to, because trust is enforced by the consensus mechanism itself.

The Proof-of-Work

We mentioned above that one of the key innovations that make cryptocurrencies possible is the decentralized consensus mechanism. Currently, the de facto consensus mechanism is the proof-of-work.

The proof-of-work consensus mechanism was proposed by Satoshi Nakamoto in 2008 [NAKA]. In general, a proof-of-work is a piece of data that requires costly and time consuming computational effort, but it is very easy to verify. A very good analogy is one of your semester projects in college… it takes you a whole semester to finalize it, but it takes very little time and effort for your supervisor to evaluate and grade it. Similarly, proof-of-work is used by cryptocurrencies for block generation. For the network to accept the block, nodes must complete a proof-of-work which also guarantees the integrity of the transactions included in the block. The difficulty of the proof-of-work is adjusted by the network so that a new block can be generated every fixed time interval. This fixed time interval is characteristic to every cryptocurrency. Most notably, for Bitcoin this time interval is set to 10 minutes.
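A minimal proof-of-work sketch in Python. It expresses difficulty as a number of leading zero hex digits rather than the numeric target real networks use, but it illustrates the asymmetry: finding the nonce is expensive, while verifying it takes a single hash.

  import hashlib
  from itertools import count

  def proof_of_work(header: str, difficulty: int) -> int:
      # Search for a nonce such that SHA-256(header + nonce) starts with `difficulty` zero hex digits.
      target = "0" * difficulty
      for nonce in count():
          if hashlib.sha256(f"{header}{nonce}".encode()).hexdigest().startswith(target):
              return nonce

  def verify(header: str, nonce: int, difficulty: int) -> bool:
      # Verification recomputes a single hash and checks it against the target.
      return hashlib.sha256(f"{header}{nonce}".encode()).hexdigest().startswith("0" * difficulty)

  nonce = proof_of_work("block-header-data", difficulty=4)
  assert verify("block-header-data", nonce, difficulty=4)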

Satoshi Nakamoto’s solution, embodied by the proof-of-work scheme described above, achieves consensus on the network without a central trusted authority (hence the name decentralized consensus mechanism).

Enter the Ansible

For the astute reader, it is quite obvious at this point that communication delays between the network nodes have a direct effect on the protocol described above. An important assumption made when the protocol was designed is that these delays are small; this is why the network can absorb them when nodes choose to follow and validate the blocks on the longest blockchain fork, and it is also the reason why all the nodes in the network can have a say in the network consensus. They all can find a solution for the current block, and they all can be rewarded when they find one. The communication delays experienced by peripheral nodes would cripple their ability to find solutions, and these nodes would no longer be incentivized to remain in the network. Hence the network segregation effect mentioned earlier.

Unfortunately, given the current technological level of our civilization, we do not have at our disposal a technology that allows us to communicate quickly and reliably over the large expanse of space. It would take a hell of a wait time to process a payment made by a mining corporation in the TRAPPIST-1 system, 12 parsecs away in the constellation of Aquarius, to a planetary engineering corporation located in the Sol system… twice 39 years plus the block confirmation time.

Fortunately, the Sci-Fi literature already offers a solution for our problem. For those of you who, Sci-Fi nerds like myself, have already read Rocannon’s World [LEGU] and Ender’s Game [CARD], the ansible device must sound very familiar. The ansible is a fictional device capable of faster-than-light communication. To word this differently, an operator of such a device can send and receive messages to and from another device over any distance with no delay.

Hence, even if only in the realm of science fiction, we can devise a solution for the problem that future cryptocurrency enthusiasts, living in the outer space colonies, will eventually face.

The Special Theory of Relativity

Before elaborating on the required upgrade of the network protocol, we have to discuss the special theory of relativity and its implications for how time and distance are perceived in different reference systems.

Albert Einstein was awarded the Nobel Prize in Physics in 1921. He received it for his contributions to the understanding of the photoelectric effect, after publishing a paper on it in 1905. At that point his contributions to the understanding of gravity through his theory of relativity were well known, but the new perspective on gravity offered by Einstein’s theory was so controversial that the Nobel Prize Committee members chose to protect their reputation. They decided that it was appropriate to award Einstein the Nobel Prize for “his services to theoretical Physics, and especially for his discovery of the law of the photoelectric effect.”

During his research Einstein attempted to reconcile the principle of relativity with the principle of the constancy of the velocity of light. This attempt led Einstein to the discovery of the special theory of relativity. Einstein’s Gedankenexperiment (thought experiment) with a test subject travelling by train is very well known in the scientific community. If our human subject is walking towards the front of the train with velocity w with respect to the train, and the train is moving with velocity v with respect to the embankment, then an observer on the embankment will measure the velocity of our subject as W = v + w. If instead of our traveller we consider a beam of light propagating with velocity c, the velocity measured by the observer on the embankment would be v + c. However, this violates the principle that the velocity of light is constant in any inertial reference system and equal to c. Einstein found a solution for this problem and thus resolved the incompatibility.

One direct consequence of the special theory of relativity is the Lorentz transformation.

Before relativity, time in physics had an absolute significance, independent of the state of motion of the reference system. In relativity, however, every inertial reference system has its own particular time, and we always have to be told which inertial reference system a statement of time refers to. If you look at your watch, the time you read is the time as measured in the Earth’s inertial reference system. By the way, the Earth circles the Sun on an almost circular orbit, which means that any body maintaining a constant position in the Earth reference system experiences a small centripetal acceleration; strictly speaking, the frame is not inertial. However, this centripetal component is negligible compared to the gravitational pull of the Earth, so engineers use the approximation of an inertial Earth frame when calculating satellite orbits.

Back to our original train of thought… the axiom that lays the foundation for the Lorentz transformation states that every ray of light possesses the velocity of transmission c relative to any inertial reference system. That is the velocity of transmission in vacuo (in a vacuum).

Following Einstein’s thought experiments [EINS], the above-mentioned axiom leads to a set of four equations that relate the spatial and temporal coordinates in two inertial reference systems:

  x’ = (x − vt)/√(1 − v²/c²)
  y’ = y
  z’ = z
  t’ = (t − vx/c²)/√(1 − v²/c²)

where:

 x, y, z, t are the coordinates in inertial reference system K,
 x’, y’, z’, t’ are the coordinates in inertial reference system K’,
 v is the relative velocity between the two inertial reference systems;

also, the expression 1/√(1 − v²/c²) is known as the Lorentz factor, γ.
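The transformation translates directly into code. A small sketch in Python (SI units, motion along the x axis):

  from math import sqrt

  C = 299_792_458.0  # speed of light in vacuum, m/s

  def lorentz_factor(v: float) -> float:
      # gamma = 1 / sqrt(1 - v^2/c^2)
      return 1.0 / sqrt(1.0 - (v / C) ** 2)

  def lorentz_transform(x: float, t: float, v: float):
      # Map coordinates (x, t) in frame K to (x', t') in frame K',
      # which moves with velocity v along the x axis relative to K.
      gamma = lorentz_factor(v)
      return gamma * (x - v * t), gamma * (t - v * x / C ** 2)

  print(lorentz_factor(0.6 * C))  # ~1.25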

The Lorentz transformation equations are a more general case of the Galilean transformation:

  x’ = x – vt
  y’ = y
  z’ = z
  t’ = t

which is the basis of the assumption in classical mechanics that both spatial and temporal coordinates have an absolute character. The Galilean equations are the result of replacing the velocity c with ∞ in the Lorentz transformation equations.

The Lorentz transformation has a few corollaries: time dilation, length contraction, relativistic mass, relativistic momentum, and relativistic kinetic energy. The one that concerns us is time dilation:

 Δt’ = γΔt

where γ is defined by

 α = 1/γ = √(1 − v²/c²)

Assuming a clock at rest in inertial reference system K, and moving with the velocity v in the inertial reference system K’, the time Δt’ between two ticks as measured in the frame K’ is longer than the time Δt between the same ticks as measured in the rest frame of the clock, K.
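As a quick numerical check: for a relative velocity of 0.6c the Lorentz factor is γ = 1/√(1 − 0.36) = 1.25, so two ticks separated by one second on the clock at rest in K are measured as 1.25 seconds apart in K’.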

Relativistic Effects on Proof-of-Work

To better understand the relativistic effects on how time is perceived in two inertial reference systems (or Galilean reference systems [EINS]) we have to mention the twin paradox. The twin paradox is a thought experiment that involves two identical twins. One of the twins makes a journey into space onboard a relativistic spaceship, and upon his return to Earth discovers that his twin has aged much more than he did. Depending on how fast the spaceship moves through space and/or how long the journey was, our traveller could return to Earth and realize that several generations have passed in the meantime.

As a direct application of the twin paradox, we have Mazer Rackham, International Fleet Admiral and Ender Wiggin’s mentor, as portrayed by Orson Scott Card [CARD]. The fighter pilot who destroyed the Formic Fleet Flagship, killed the Hive Queen, and ended the Second Formic Invasion, Mazer Rackham is sent on a journey on a relativistic spaceship and returns to Earth 100 years later in order to assist the International Fleet. Upon his return he has barely aged a few years.

Similarly, if a subnetwork is moving relative to the rest of the network, the nodes in the subnetwork will experience time dilation as predicted by the special theory of relativity. Hence, they are at a disadvantage compared to the rest of the nodes because they have less proper time available to find a solution to the proof-of-work problem. In order to make things fair for all the nodes in the network, the difficulty of the problem should be adjusted in each subnetwork using the Lorentz factor: to compensate for time dilation, the relativistic nodes should have to solve a proportionally simpler problem, and their block confirmation time should decrease as well.
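One way to express the proposed adjustment, assuming the difficulty and the block interval are simple scalars and the subnetwork’s Lorentz factor is already known (a sketch, not part of any existing protocol):

  def adjust_for_time_dilation(base_difficulty: float, base_block_interval_s: float, gamma: float):
      # A subnetwork whose proper time runs slower by a factor of gamma gets a
      # proportionally easier proof-of-work problem and a shorter target block interval.
      return base_difficulty / gamma, base_block_interval_s / gamma

  # Hypothetical subnetwork moving at 0.6c relative to the main network (gamma = 1.25):
  difficulty, interval = adjust_for_time_dilation(1.0e12, 600.0, 1.25)
  print(difficulty, interval)  # an easier target (8e11) and a 480-second block interval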

One element is still missing… how do the ansibles synchronize? What value does the Lorentz factor take for each one of them? One simple solution is to have one of the ansibles (let us say the Earth-bound one) broadcast a beacon every fixed number of seconds. The ansibles interfacing with the subnetworks would pick up the broadcast and, by measuring the time interval between beacons, infer the Lorentz factor. Once the Lorentz factor is determined, the nodes mining on the subnetworks would have their proof-of-work difficulty and block confirmation time adjusted accordingly.
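A sketch of the beacon idea, under the fictional assumption that ansible messages arrive with no delay in the Earth frame: the moving ansible’s clock runs slow, so the beacon interval it measures in its own proper time is shorter than the nominal one, and the ratio of the two recovers the Lorentz factor.

  def infer_lorentz_factor(nominal_period_s: float, measured_period_s: float) -> float:
      # nominal_period_s: beacon spacing as emitted by the Earth-bound ansible (its proper time)
      # measured_period_s: beacon spacing measured by the subnetwork ansible (its proper time)
      return nominal_period_s / measured_period_s

  print(infer_lorentz_factor(nominal_period_s=60.0, measured_period_s=48.0))  # 1.25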

A More Down-to-Earth Solution for the Problem

We will fork the storyline (pun intended) at the Enter the Ansible section and suggest a more down-to-earth solution for the problem. The peripheral nodes do not participate in the consensus; their function is simply to relay messages (a.k.a. payment information) across the network. In our multi-planetary scenario, the nodes working on the proof-of-work would reside on or around the Earth, and the network nodes on Mars or in any other region of the Sol system would have to wait twice the network packet travel time plus the block confirmation time in order to confirm a payment. However, such a solution has an Achilles heel that is very hard to defend: any peripheral node that starts its own mining engine could take over the local network and force a topological fork of the network. For a node, the incentive of staying honest on the current network must be stronger than any reward obtained by cheating.
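As a back-of-the-envelope check of what such a relay scheme would cost a customer on Mars (the light-time figures are approximate; the one-way Earth–Mars light time swings roughly between 3 and 22 minutes with orbital geometry):

  def confirmation_latency_minutes(one_way_light_minutes: float, block_time_minutes: float = 10.0) -> float:
      # Two one-way trips (transaction out, confirmation back) plus one block interval.
      return 2.0 * one_way_light_minutes + block_time_minutes

  print(confirmation_latency_minutes(3.0))   # ~16 minutes near closest approach
  print(confirmation_latency_minutes(22.0))  # ~54 minutes near solar conjunction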

Concluding remarks

While the blogpost contains some elements of fiction, it accurately describes the cryptocurrency ecosystem, the special theory of relativity, and the Lorentz transformation with its corollaries. We would like to thank the readers who went through this exercise with us. Undeniably, cryptocurrencies are disrupting the global economy and they are here to stay. Undeniably, we — as a species — will explore and migrate farther than LEO. History has taught us that decentralization is the key to survival and prosperity. Let us together make the future happen.

References

[ANTO] Antonopoulos, Andreas M., Mastering Bitcoin, Second Edition, O’Reilly Media, Inc., June 2017;
[CARD] Card, Orson Scott, Ender’s Game, A Tor Teen Book, 2014;
[EINS] Einstein, Albert, Relativity: The Special and the General Theory, Princeton University Press, 2015;
[LEGU] Le Guin, Ursula K., Rocannon’s World, Harper & Row, 1977;
[NAKA] Nakamoto, Satoshi, Bitcoin: A Peer-to-Peer Electronic Cash System, www.bitcoin.org, 2008.

 


 

Credits: SpaceX

 

 

 

Disruptive technology sounds like a bizarre (and scary) term, but the concept behind it is neither bizarre nor scary. The concept was introduced by Clayton Christensen. In his book The Innovator’s Dilemma: The Revolutionary Book That Will Change the Way You Do Business, Christensen shows that, under certain circumstances, companies that do everything right can still lose their market share or even go out of business. He also presents a set of rules that can help companies capitalize on disruptive innovation.

 

While I am not trying to give a lecture on economics, I would like to understand how to apply (if possible) the principles of disruptive technologies to the space industry. A very good example is quite at hand… SpaceX.

 

 

We can start by defining the key concepts: sustaining technology and disruptive technology. These are the textbook definitions. A sustaining technology is a new technology that improves the performance of established products along the dimensions that mainstream customers value. A disruptive technology is a new technology that brings a radically different value proposition to market: products based on it initially underperform established products in mainstream markets, but they have features that some customers value.

 

What is not obvious is that even though disruptive technologies may result in worse product performance in the short term, they can be fully competitive in the same market in the long run because technologies tend to progress faster than market demand.

 

Now let us see what are the 5 principles of disruptive technologies (as defined by Clayton Christensen):

Principle #1: Companies depend on customers and investors for resources (at the end of the day, the customers and the investors dictate how a company spends its money).

Principle #2: Small markets do not solve the growth needs of large companies (large companies wait until small markets become interesting, and entering a small market at the moment it becomes interesting is often too late).

Principle #3: Markets that do not exist cannot be analyzed (there are no established methods to study or to make predictions for emerging markets, as there is no data to infer from).

Principle #4: An organization’s capabilities define its disabilities (we all have our blind spots).

Principle #5: Technology supply may not equal market demand (as established companies move towards higher-margin markets, the vacuum created at lower price points is filled by companies employing disruptive technologies).

 

Why do you think established companies fail to adopt disruptive technologies? Established companies listen to their customers, invest aggressively in the new technologies that give those customers more and better products of the kind they want, and study their markets and allocate investment capital only to the innovations that promise the best return. Good management is sometimes the very reason established companies fail to stay atop their industries.

 

And this is why technology startups can fill the niche… Many widely accepted good-management principles are only situationally appropriate. Sometimes it is right not to listen to your customers, right to invest in technologies that promise lower margins, and right to pursue small markets. This can happen in a small company, a technology startup where big outside stakeholders are not vested and where new technology development is the main drive.

 

Now that the lecture has been delivered, it is time to ask the questions. Why is SpaceX perceived as disruptive? Is SpaceX really disruptive? In what way?

 

The declared goal of SpaceX is to make space more accessible, that is, to bring kg-to-LEO prices down. If you have a basic knowledge of launch systems, you know that the propulsion technology employed today is pretty much the same as that used by the Mercury, Gemini, and Apollo space programs: liquid-fuel rocket engines. The Russian Soyuz, whose basic rocket engine design has not changed much since the Semyorka days, is living proof that rocket engineers do not want to fix things that work well. While aerospike engines and nuclear rocket engines make the front page from time to time, the good old liquid-fuel, expansion-nozzle rocket engine will be here to stay for a long time.

 

Given the circumstances, how do you bring manufacturing and launch costs down? As a software engineer who spent a number of years in a software startup, I can recognize a number of patterns… First, Musk knows how to motivate his engineers. Doing something cool is a big driver. I know that. And working on a space launch system that one day may put the first human colonists on Mars must be a hell of a motivator.

 

Modular design… software engineering principles are at work: build reliable components and gradually increase the complexity of your design. Falcon 9 and Falcon Heavy are built on a modular design that has the Merlin 1D engine at its core. An important detail to mention here: SpaceX builds the hardware in-house. Obviously, outsourcing would increase the manufacturing costs.

 

If you are familiar with the Russian Soyuz launch vehicle, you will acknowledge that Musk has borrowed proven (and cheaper) technology for the Falcon launch vehicles: LOX/RP-1 propellants, vernier thrusters, and horizontal integration of the first stage, second stage, and the Dragon spacecraft. These choices simplify the overall design and bring the costs down substantially.

 

To put it the way SpaceX many times did: “simplicity, reliability, and low cost can go hand-in-hand.”

 

One thing to notice is that the most important innovation introduced by SpaceX is in the design and manufacturing process, which is in-house and as flat as possible. Rearranging the pieces of the puzzle can often give the competitive advantage. Lean and mean is the new way.

 

SpaceX is not just trying to bring down the launch prices, it is actually trying to disrupt the status quo… and this makes the battle harder. SpaceX dixit: “SpaceX’s goal is to renew a sense of excellence in the space industry by disrupting the current paradigm of complacency and replacing it with innovation and commercialized price points; laying the foundation for a truly space-faring human civilization.”

 

When developing the theory around disruptive technologies, Clayton Christensen has studied the hard disk drive and the mechanical excavator industries. The US space industry is a different ecosystem. Do the 5 principles presented above need adjustment?

 

Not really. Principle #1 is valid and applies in this case as well. Self-funded SpaceX followed a market strategy not dictated by customers or investors. The small payload launcher market, targeted by SpaceX with Falcon 1 and Falcon 1e, was an area neglected by established space companies as Principle #2 states. Principle #3 explains why established companies have neglected the small payload market.

 

Does mastering small payload launcher technology qualify one to enter the heavy launcher market? SpaceX managed to overcome Principle #4. Will SpaceX retire its Falcon 1 launch vehicles and leave the small launcher market for good? In this case, I would see Principle #5 as a warning. While the heavy launchers offer better profit margins, would it be a smart move to leave an emerging market that (currently) offers low profit margins? This remains to be seen.

 

 


 

Credits: NASA

 

The International Organization for Standardization (ISO) has implemented the UN Space Debris Mitigation guidelines in a number of standards.

 

The standards prescribe requirements that are derived from already existing international guidelines, but they capture industry best practices and contain specific actions to be taken by hardware manufacturers to achieve compliance.

 

 

The highest-level debris mitigation requirements are contained in a Space Debris Mitigation standard. This standard defines the main space debris mitigation requirements applicable over the life cycle of a space system and provides links to lower-level implementation standards. It is also important to be able to assess, reduce, and control the potential risks that space vehicles re-entering Earth’s atmosphere pose to people and the environment. The Re-entry Risk Management standard provides a framework that is useful in this regard.

 

The seven guidelines endorsed by the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS), also known as the Space Debris Mitigation Guidelines of COPUOS, are:

“limit debris released during normal operations;

minimize the potential for break-ups during operational phases;

limit the probability of accidental collision in orbit;

avoid intentional destruction and other harmful activities;

minimize potential for post-mission break-ups resulting from stored energy;

limit the long-term presence of spacecraft and launch vehicle orbital stages in LEO after the end of their mission;

limit the long-term interference of spacecraft and launch vehicle orbital stages with GEO region after the end of their mission;”

 

The good news is that, as of the end of 2010, most of the space-faring nations have implemented regulations on space debris mitigation at the national level.

 

 


 

Credits: CSA

 

Canada is actively involved in space debris mitigation research and development activities. At the international level, Canada hosted the International Conference on Protection of Materials and Structures from the Space Environment (ICPMSE) in May 2008, and contributed to the 37th Committee on Space Research (COSPAR) Scientific Assembly in July 2008.

 

 

At the national level, the space debris research and development activities are coordinated by the Canadian Space Agency (CSA), which formed the Orbital Debris Working Group (ODWG). The group was formed in order to address a number of objectives:

“to increase the Scientific and Technical (S&T) knowledge and awareness of orbital debris in the space community;

to identify and encourage targeted Research and Development (R&D) in orbital debris and mitigation measures;

to identify and encourage development of orbital debris detection and collision avoidance techniques and technologies;

to promote Scientific and Technical (S&T) collaboration across Canada and with our international partners;

to identify Scientific and Technical (S&T) opportunities in relation to future potential missions which can directly benefit from the results of targeted Research and Development (R&D) and novel operational techniques, and develop and coordinate technical solution in Canada and with international partners; and

to establish and maintain technical liaison with our international partners in order to foster a sustainable space environment.”

 

The Canadian space debris mitigation research and development activities are focused on three main areas: hypervelocity impact facilities, debris mitigation and self-healing materials, and spacecraft demise technologies. Hypervelocity impact facilities are facilities capable of accelerating projectiles to velocities of more than 10 km/s. Canada is developing an implosion-driven hypervelocity launcher facility; such a facility could accelerate projectiles with a mass of 10 g to speeds of 10 km/s, facilitating meaningful impact studies. Self-healing materials have the capability to initiate a self-healing process after an impact, providing in-situ mitigation of space debris damage on board spacecraft. The Canadian Space Agency has supported efforts to develop and test a self-healing concept demonstrator. Spacecraft demise technologies ensure intentional and complete disintegration during re-entry, so that no debris reaches the ground. In this direction, studies have been conducted that investigate various technologies that could be used to de-orbit micro- and nanosatellites.

 

In Canada, space operators and manufacturers adopt space debris mitigation measures on a voluntary basis. The Inter-Agency Space Debris Coordination Committee (IADC) guidelines are used for monitoring activities to prevent on-orbit collisions and to conduct post-mission disposal. There are also strict requirements integrated into Canadian policies and regulations that address the post-mission disposal of satellites. For example, as required by the Canadian Remote Sensing Space Systems Act, space system manufacturers have to provide information regarding the method of disposal for the satellite, the estimated duration of the satellite disposal operation, the probability of loss of human life, the amount of debris expected to reach the surface of the Earth upon re-entry, an estimate of the orbital debris expected to be released by the satellite during normal operations or by explosion, etc. There are also interesting recommendations made for the operation and post-mission disposal of satellites in geostationary orbits. The Environmental Protection of the Geostationary Satellite Orbit recommendation states “that as little debris as possible should be released into the geostationary orbit during the placement of a satellite in orbit”, and also that “a geostationary satellite at the end of its life should be transferred, before complete exhaustion of its propellant, to a supersynchronous graveyard orbit”, where the recommended minimum re-orbiting altitude is given as 300 km.

 

 


 

Credits: NASA

 

Let us see how the areas mentioned in the previous Sustainability in LEO post are covered at national level in the United States.

 

The United States has implemented a space traffic management program in the form of the Joint Space Operations Center (JSpOC) of the U.S. Strategic Command at Vandenberg Air Force Base in California.

 

 

JSpOC conducts periodic conjunction assessments for all NASA programs and projects that operate maneuverable spacecraft in low Earth orbit (LEO) or in geosynchronous orbit (GEO). Depending on the mission, the conjunction assessments can be performed up to three times daily. If JSpOC identifies an object that is expected to come into the proximity of a NASA spacecraft, and the collision risk is high enough (for manned missions the threshold is a collision probability of 1 in 10,000, while for robotic missions it is 1 in 1,000), a conjunction assessment alert message is sent to mission control so that collision avoidance maneuver commands can be sent to the spacecraft. The alert messages contain the predicted time and distance at closest approach, as well as the uncertainty associated with the prediction.
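A sketch of that screening rule with the thresholds quoted above (the names and structure are mine, not JSpOC’s):

  # Alert thresholds on estimated collision probability, per mission type.
  THRESHOLDS = {"manned": 1 / 10_000, "robotic": 1 / 1_000}

  def needs_alert(collision_probability: float, mission_type: str) -> bool:
      # Issue a conjunction assessment alert when the estimated risk reaches the mission's threshold.
      return collision_probability >= THRESHOLDS[mission_type]

  print(needs_alert(2e-4, "manned"))   # True  (2e-4 >= 1e-4)
  print(needs_alert(2e-4, "robotic"))  # False (2e-4 <  1e-3)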

 

The control of the creation of space debris is addressed by orbital debris mitigation standard practices in four major areas: normal operations, accidental explosions, safe flight profile and operational configuration, and post-mission disposal of space structures. There are also NASA standards and processes that aim at limiting the generation of orbital debris.

 

The commonly adopted mitigation methods, which focus on minimizing the creation of new space debris, will not preserve the near-Earth environment for future generations. As a matter of fact, the debris population increase will be worse than predicted by LEGEND-generated models (LEGEND is NASA’s LEO-to-GEO Environment Debris model) due to ongoing launch activities and unexpected (but possible) major breakups. This is where active space debris environment remediation comes into play.

 

Active space debris environment remediation is mainly concerned with the removal of large objects from orbit. Such large objects are defunct spacecraft (i.e. communication satellites that have exceeded their operational life), upper stages of launch vehicles, and other mission-related objects. The removal of large objects from orbit is known as Active Debris Removal (ADR). Several innovative concepts are under study, among them tethers used for momentum exchange or electrodynamic drag, aerodynamic drag devices, solar sails, and auxiliary propulsion units. LEGEND studies have revealed that ADR is a viable control method as long as an effective removal selection criterion based on mass and collision probability is used and at least five objects are removed from orbit every year. The electrodynamic tethers seem to lead the competition so far, as they have a low mass requirement and can remove spent or dysfunctional spacecraft from low Earth orbit rapidly and safely.

 

Re-entry into the Earth’s atmosphere of space mission related objects is an important aspect to be considered in this context. Even though no casualties or injuries caused by components of re-entering spacecraft have been reported so far, fragments of space hardware pose a risk to human life and property on the ground. One big concern is that the point of impact of an uncontrolled re-entry cannot be calculated exactly. The uncertainties are due to the large number of parameters that affect the trajectory and the heat of ablation of objects re-entering the atmosphere.

 

 
