OrbitalHub

The place where space exploration, science, and engineering meet

Archive for the “The Best Of” category

Today we are joined by Yasunori Yamazaki, Chief Business Officer at Axelspace. Axelspace are pioneers of microsatellite technology, advancing the frontiers of space business, reimagining traditional ways of using space, and creating a society where everyone on our planet can make space part of their life.

Orbital Hub: Axelspace’s goal is to advance the frontiers of space business. How is Axelspace making space more accessible?

Yasunori Yamazaki: Our vision is to bring space technology down to Earth for universal access, empowering everyone with actionable Earth observation data to make smart decisions.

O.H.: Could you share any details about innovative technologies used by Axelspace when designing and building satellites?

Yasu: We have been developing satellites for more than 11 years now, experimenting with various methods and implementing new technology to constantly improve and innovate. This trial-and-error approach is itself a new concept in our industry, as the cost of making a mistake is prohibitive from an investment perspective.

O.H.: What is the approach used by Axelspace for microsatellite design? Do you use custom designs specific to each mission, or a modular design that allows reuse and minimal mission-specific customization?

Yasu: The design process depends on the mission. For unique purposes, we will start with a whiteboard, deep-diving into the problem and figuring out the most efficient and effective way of delivering the solution. We are also in the process of constructing an orbital infrastructure, based on our proprietary modular satellite, GRUS, to bring down the cost of manufacturing, thus passing the savings on to the users of the data.

O.H.: What payload types can be integrated with Axelspace microsatellites?

Yasu: Almost anything can be carried by our microsatellites, as we can build anything from small to large satellites. The largest we have successfully deployed into space is a 200 kg satellite, which is a fantastic platform that can carry almost any payload in a radically cost-effective way.

O.H.: What type of stabilization is used by Axelspace microsatellites?

Yasu: We don’t comment on specific internal technology.

O.H.: What type of propulsion systems are integrated with Axelspace microsatellites? Are they mission specific?

Yasu: We don’t comment on specific internal technology.

O.H.: Is Axelspace designing and manufacturing only remote sensing microsatellites?

Yasu: We have been focusing on perfecting our expertise in remote sensing microsatellites. As we are a market-driven company, our limitation is not technology, but true market demand. Our business team is constantly monitoring the trends in the market and is ready to dive in any direction when the time is ripe.

O.H.: Any plans for deep space exploration missions? Could the current bus be repurposed for a deep space mission?

Yasu: We are open to any mission, as long as there is a concrete market and sustainable paying clients. The company never works on a technology without concrete business visibility.

O.H.: Remote sensing satellites are usually deployed on Sun-synchronous polar orbits. This leads to crowded LEO and increased collision risks above the polar regions. What end-of-life strategies are Axelspace missions using?

Yasu: As a constellation player, we are conscious of end-of-life (EOL) operations and comply with the international guidelines on securing the sustainable usage of our orbits.

O.H.: What is AxelGlobe?

Yasu: AxelGlobe is a web-based platform for accessing Earth observation data from our proprietary satellite, GRUS, empowering anyone with actionable data to make smart decisions.

O.H.: Launching and managing a fleet of 50+ microsatellites in LEO must be a challenging endeavour. Can you elaborate on some of these challenges? How is Axelspace tackling them?

Yasu: Absolutely! There is no shortcut in implementing space technology. To be successful in this business, these are the four simple, yet critical points to cover:

1. Transformational IDEA to bring value to the market
2. Proven Engineering to bring IDEA into product
3. Solid Financial Resource to bring product into reality
4. Paying clients to have a sustainable business model

To achieve the above, we have an inspiring leadership team that brings the IDEA to the table, an experienced engineering team that can convert anything into a product, an insightful finance team to secure the funding, and a powerful business team to generate revenue for the TEAM.

O.H.: What does the future hold for Axelspace? Any exciting plans to share with our readers?

Yasu: When we started the company 11 years ago, no one believed that a startup could actually do anything meaningful in the space industry. Now, after years of hard work, we have 5 operating satellites in space. Next year, we have 4 more confirmed launches, and we will continue to deploy every year. As a pioneer in the commercial microsatellite world, we will keep working hard and focus on engineering for good.

October 4, 2019

Supply Chain in the Cislunar Space


Today we are joined by Logan Ryan Golema, Founder & Principal, and Vishal Singh, Chief Scientist at Lunargistics. Lunargistics is the Space Division of Hercules Supply Chain Protocol, and it aims to provide swift logistics in cislunar space. Logan and Vishal were kind enough to answer a few questions about Lunargistics and the supply chain in cislunar space.

Orbital Hub: How big of a risk are the counterfeit components in the aerospace supply chain?

Logan Ryan Golema: You’d be surprised; I know I was. The aerospace industry has three types of companies: those that make their own parts, those that buy their parts, and those that sell parts. And some of them do all three! These companies often work with local manufacturers, hence the risk of fraud is very high.

Vishal Singh: More often than not everything is OK and well documented, but when there is a mistake or a fraudulent document on a fake part, disaster can happen. Those disasters can be catastrophic: an aerospace structure failing in the air or in orbit can take lives on the ground. A failure caused by a fraudulent document or an undetected error is a man-made disaster. When we talk about a space mission, an inch of calculation error due to fraudulent documents can lead to a conflict between states or, even worse, take the lives of thousands of innocents.

O.H.: How is blockchain technology used to mitigate the risk of counterfeit components in the aerospace supply chain?

L.R.G.: Blockchain solves a lot of issues, from fraudulent documents to the manufacturing and maintenance of everything from airplanes to rockets. It is like giving each component a birth certificate and an IMEI, making it possible to trace the root cause of every problem that occurs in flight or in manufacturing.

V.S.: Let’s take the example of India’s ambitious mission Chandrayaan-2, which probably failed due to a failure of its power and communication systems. Using blockchain in the industry would turn that “probably” into a definite answer about the cause of failure.

O.H.: What blockchain infrastructure is Lunargistics using?

L.R.G.: Lunargistics will be leveraging the Hercules Blockchain Protocol (https://herc.one). Our aim is to onboard existing aerospace companies in Europe and across the globe onto this powerful tool, with enterprise-level APIs and high-performance apps. We’re set up with the client in mind, so they can focus on their mission while we handle the blockchain side of things.

O.H.: What are the defining features of this blockchain infrastructure?

L.R.G.: The interoperability and layering of modular components. The Hercules Protocol acts sort of like a LAMP stack of old. Today, with Lunargistics managing your HERC stack, you’ll have:
– indisputable data integrity,
– timestamped uploads,
– files that will be accessible without fail,
– portfolios of the people involved in manufacturing everything from something as small as a screw to the powerhouse of an engine.

It’s like having the birth certificate and report card of each component. A blockchain system based on the Hercules module would help minimise failures like those of Israel’s Moon mission and Chandrayaan-2.

O.H.: Is it possible to use a public blockchain infrastructure and, at the same time, address the privacy concerns of the aerospace industry?

L.R.G.: We’ve found a way to integrate a hybrid model of privacy while leveraging public chains. On the flip side, we do offer build-outs of private infrastructure that can be available just to the client’s network. It’s wholly up to the necessities of the mission, and we pride ourselves on our ability to adapt.

O.H.: Is the cislunar space the first step? Does Lunargistics have plans to expand beyond that?

L.R.G.: I’d say that if we can manage the market in Earth’s cislunar space, we’re doing well. Lunargistics doesn’t just have to be about our Moon, though. We’d love to scale to Titan or Europa when the timing is right.

V.S.: By the dawn of the next decade we may have begun working with NEO mining companies and fulfilling the needs of the Econosphere. Our expert team has enough time to plan, giving a robust buffer that will help us reach the desired goals.

O.H.: What does the near future hold for Lunargistics? Can you share any exciting plans with our readers?

L.R.G.: We’re hard at work onboarding the team that will bring us closer to our goals. As a ‘New Space’ company we’re excited to be accepted into the community by your readers.

Any aerospace companies that want to understand blockchain while keeping focused on their own mission should email us at partnerships@lunargistics.lu.

We’re also hiring! So suit up for the next mission and submit your CVs to careers@lunargistics.lu!


The complexity of aerospace systems is increasing exponentially. Both hardware and software subsystems are becoming more complex, and the behaviour of the encompassing systems becomes difficult to model due to the dependencies, relationships, and other interactions between their components. The predictable behaviour of a complex aerospace system rests on the reliability of each of its subsystems.

According to published reports, total counterfeiting globally reached 1.2 trillion USD in 2017 and is predicted to reach 1.82 trillion USD by 2020. Counterfeiting affects all industries, aerospace and defense included. It turns out that identifying counterfeit components in the aerospace and defense supply chain is really challenging. In 2011 it was estimated that up to 15% of the spare and replacement parts used by the US military were counterfeit. In a 9-page report dated November 4, 2016, obtained by Reuters through a freedom of information request, the Federal Aviation Administration (FAA) said 273 affected parts were installed in an unspecified number of Boeing 777 wing spoilers.

Counterfeit components entering the aerospace market decrease the reliability of the subsystems they end up in. The consequences of using unreliable components in the aerospace and defense industries should not be underestimated, let alone ignored. Parts manufactured for launch systems, spacecraft, aircraft, and weapon systems that do not meet the required specifications should stay out of the supply chain.

There are various counterfeiting methods. Just to give an example, counterfeiting methods employed in the electronics supply chain include:

  • Remarking of new or already used components with false manufacturer names, part numbers, date codes, lot numbers, or quality levels. One way to identify remarked electronics is to engage the original manufacturers; however, there have been cases in which the remarking was performed by the original manufacturer itself.
  • Reuse of salvaged components, a trend driven by the increasing recycling of electronics. Certain countries import used electronics and return to the marketplace components removed from discarded circuit boards.
  • Outsourcing production to production facilities that are not employing proper testing or do not meet specifications.
  • False approval markings used by manufacturers that skip the required certification process.

In order to protect itself, the aerospace and defense industry enforces quality management systems standards. AS9100 is a quality management systems standard for aviation, space, and defense organizations: it includes the ISO 9001 quality management system requirements and, in addition, specifies aviation, space, and defense industry requirements. It is important to note that the requirements contained in AS9100 are complementary to existing customer and applicable statutory and regulatory requirements, which take precedence. The requirements of the standard are applicable to any organization, regardless of its type, its size, or the products and services it provides.

AS9100 defines counterfeit product as “An unauthorized copy, imitation, substitute, or modified part, which is knowingly misrepresented as a specified genuine part of an original or authorized manufacturer. NOTE: Examples of a counterfeit part (e.g., material, part, component) can include, but are not limited to, the false identification of marking or labeling, grade, serial number, date code, documentation, or performance characteristics.”

How is AS9100 helping combat the acceptance of counterfeit components in the aerospace and defense supply chain? A number of AS9100 clauses provide requirements relating to the mitigation and prevention of counterfeit components. These clauses are Counterfeit Part Prevention, Control of External Providers, and Information to External Providers. The Counterfeit Part Prevention clause states: “the organization shall plan, implement and control a process appropriate to the product that prevents the use of counterfeit products and their inclusion in product(s) delivered to the customer.”

Also, the Control of Nonconforming Outputs clause requires “counterfeit, or suspect counterfeit, parts shall be controlled to prevent reentry into the supply chain. Unsalvageable and counterfeit parts shall be conspicuously and permanently marked, or positively controlled, until physically rendered unusable to prevent restoration.”

The aerospace industry continues to allow manufacturers to maintain sole responsibility for their own manufacturing records. Also, the proliferation of practices known as “source delegation” and “self-regulation” places the responsibility for supporting documentation solely in the hands of suppliers. While the above-mentioned AS9100 clauses can help alleviate some of these issues, there is an immediate need for supply chain traceability. An industry-wide supply chain database with guaranteed access to all quality-related documentation seems to offer an effective means of countering counterfeit components in the aerospace and defense industry.

References and other useful links:

Counterfeit examples for electronic components

Wikipedia article on AS9100 standard

Quality digest article on AS9100 standard


I recently watched a very interesting DEF CON 26 talk given by three investigative journalists, presenting their findings about fraudulent pseudo-academic conferences and journals. There are fake science factories cashing in on millions of dollars every year while lending junk studies a veneer of scientific credibility. We should not underestimate the damage these pseudo-academic conferences can do to society.

Predatory open-access publishing is an open-access academic publishing business model that charges fees to authors without providing the services associated with legitimate journals. The model is exploitative: academics are tricked into publishing without benefiting from editorial and publishing services.

Similarly, predatory conferences/meetings, despite being set up to appear as legitimate scientific conferences, do not provide proper editorial control. These conferences also claim the involvement of prominent academics who are not actually involved.

The characteristics associated with predatory open-access publishing include:

  • Accepting articles quickly with little or no peer review or quality control, including hoax and nonsensical papers.
  • Notifying academics of article fees only after papers are accepted.
  • Aggressively campaigning for academics to submit articles or serve on editorial boards.
  • Listing academics as members of editorial boards without their permission, and not allowing academics to resign from editorial boards.
  • Appointing fake academics to editorial boards.
  • Mimicking the name or web site style of more established journals.
  • Making misleading claims about the publishing operation, such as a false location.
  • Using ISSNs improperly.
  • Citing fake or non-existent impact factors.

Characteristics of predatory conferences/meetings include:

  • Rapid acceptance of submissions with poor quality control and little or no true peer review.
  • Acceptance of submissions consisting of nonsense and/or hoaxed content.
  • Notification of high attendance fees and charges only after acceptance.
  • Claiming involvement of academics in conference organizing committees without their agreement, and not allowing them to resign.
  • Mimicry of the names or website styles of more established conferences, including holding a similarly named conference in the same city.
  • Promoting meetings with unrelated images lifted from the Internet.

You might ask, why mention this on a space blog? Well, this affects the scientific community overall, and there are quite a few aerospace pseudo-academic conferences out there that employ these practices. Heads up!

The above-mentioned DEF CON talk is available on YouTube and I encourage you to take the time to watch it: DEF CON 26 – Svea, Suggy, Till – Inside the Fake Science Factory.

References and other useful links:

Predatory open-access publishing

Predatory conference

Beall’s List


Abstract

This blog post explores relativistic effects on proof-of-work based cryptocurrency protocols. Cryptocurrencies are here to stay, and it is quite plausible that future human colonists spread across the solar system and beyond will use a decentralized cryptocurrency as opposed to a fiat currency issued by a central authority. The low transaction fees, the ubiquitous access, not being bound by exchange rates or interest rates, not being controlled by financial institutions that serve foreign interests: these are some of the advantages cryptocurrencies will enjoy in the thriving exo-economy.

Motivation

At present, the information exchanged by the nodes of a cryptocurrency network reaches each node almost instantaneously. The speed at which TCP/IP packets travel and the fact that the Internet spans only the Earth and LEO make this possible. However, once the region the network is spread across grows beyond certain boundaries, the network size will have a negative impact: increasing communication failures due to network delays, more frequent and longer blockchain forks as part of the proof-of-work protocol, and network segregation into local sub-networks (let us call them topological forks), just to name a few.

Nullius in verba… as they say. Let’s go deeper into details and figure out a feasible solution for a truly interplanetary cryptocurrency.

Cryptocurrency 101

You can think of a cryptocurrency as a digital money ecosystem. Plain and simple. A collection of technologies makes up this ecosystem, all of them the result of years of research in the cryptography and distributed systems fields: a decentralized network of computers, a public transaction ledger (also known as a blockchain), a protocol that consists of a set of rules for transaction validation, and a decentralized consensus mechanism. [ANTO]

A decentralized network of computers ensures the resilience of the network, in terms of both computing power and data storage. As the computing power is distributed across the network, any disruption can be handled successfully. Transaction data resides on all nodes of the network, which implies that even physical damage to network nodes will not take the network out.

The blockchain is a public distributed ledger that stores all transactions handled by the network. As the name suggests, the blockchain is a list of data blocks. Each of these blocks contains a set of transactions, as many as can fit given the maximum block size (a characteristic of the network). Transactions contain sender and receiver information and the amount/asset that is changing ownership; they are broadcast and added to blocks by network nodes. Each block is linked to the previous block in the chain by a cryptographic hash (the hash of the previous block becomes part of the current block). This backward link leads all the way to the first block in the chain, the Genesis block; each cryptocurrency blockchain has one. The cryptographic hashes play an important role in protecting the blockchain from tampering attempts.
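
To make the backward link concrete, here is a toy sketch in Python (the field names and the string “transactions” are illustrative choices of ours, not any real client’s format): tampering with any block invalidates every link after it.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's canonical JSON encoding."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

# Each block embeds the hash of its predecessor.
genesis = {"prev_hash": None, "transactions": ["coinbase -> alice: 50"]}
block1 = {"prev_hash": block_hash(genesis), "transactions": ["alice -> bob: 10"]}
block2 = {"prev_hash": block_hash(block1), "transactions": ["bob -> carol: 3"]}

# Tampering with the genesis block changes its hash, which no longer
# matches block1's prev_hash, and so on down the whole chain.
genesis["transactions"][0] = "coinbase -> mallory: 50"
assert block_hash(genesis) != block1["prev_hash"]
```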

The transaction validation rules enforced by the nodes in the network ensure that the content of each block in the blockchain is valid. By far, the most frequent form of fraud is double spending. The validation process makes sure that the inputs of each transaction exist and that they have not already been spent. Transactions marked as invalid are rejected by the network and do not make it onto the blockchain.
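
A toy version of that check against a set of unspent outputs (a simplified UTXO-style sketch of ours, not any particular protocol’s actual rules):

```python
def validate(tx: dict, utxo: set) -> bool:
    """Reject a transaction whose inputs are unknown or already spent."""
    if not all(inp in utxo for inp in tx["inputs"]):
        return False  # missing or already-spent input: a double-spend attempt
    for inp in tx["inputs"]:
        utxo.discard(inp)          # mark inputs as spent
    utxo.update(tx["outputs"])     # register the newly created outputs
    return True

utxo = {"txA:0", "txB:1"}
assert validate({"inputs": ["txA:0"], "outputs": ["txC:0"]}, utxo)
assert not validate({"inputs": ["txA:0"], "outputs": ["txD:0"]}, utxo)  # double spend
```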

The consensus mechanism is designed so that all the nodes in the network can agree on the set of transactions to be included in the current block. It shifts the authority and the credibility required of a central clearing house to a network of nodes. It is important to mention that, inherently, the nodes do not trust each other, and they do not have to, because trust is enforced by the consensus mechanism itself.

The Proof-of-Work

We mentioned above that one of the key innovations that make cryptocurrencies possible is the decentralized consensus mechanism. Currently, the de facto consensus mechanism is the proof-of-work.

The proof-of-work consensus mechanism was proposed by Satoshi Nakamoto in 2008 [NAKA]. In general, a proof-of-work is a piece of data that requires costly and time-consuming computational effort to produce, but is very easy to verify. A very good analogy is one of your semester projects in college… it takes you a whole semester to finalize it, but it takes very little time and effort for your supervisor to evaluate and grade it. Similarly, proof-of-work is used by cryptocurrencies for block generation. For the network to accept a block, nodes must complete a proof-of-work, which also guarantees the integrity of the transactions included in the block. The difficulty of the proof-of-work is adjusted by the network so that a new block is generated every fixed time interval. This time interval is characteristic of every cryptocurrency; most notably, for Bitcoin it is set to 10 minutes.
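
As an illustration, here is a minimal hash-based proof-of-work sketch in Python (a generic illustration of the scheme, not Bitcoin’s actual implementation): finding the nonce is expensive, while verifying it costs a single hash.

```python
import hashlib

def proof_of_work(block_data: bytes, difficulty: int) -> int:
    """Search for a nonce so the block hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty: int) -> bool:
    """Verification is one hash, no matter how long the search took."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work(b"block header + transactions", difficulty=4)
assert verify(b"block header + transactions", nonce, difficulty=4)
```

In this toy scheme, each extra zero digit multiplies the expected search effort by 16, which is how a network can tune block generation to a fixed time interval.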

Satoshi Nakamoto’s solution, embodied by the proof-of-work scheme described above, achieves consensus on the network without a central trusted authority (hence the name decentralized consensus mechanism).

Enter the Ansible

For the astute reader, it is quite obvious at this point that communication delays between the network nodes have a direct effect on the protocol described above. An important assumption made when the protocol was designed is that these delays are small; this is why the network can handle them when the nodes choose to follow and validate the blocks on the longest blockchain fork. It is also the reason why all the nodes in the network can have a say in the network consensus: they all can find a solution for the current block, and they all can be rewarded when they find the solution. The communication delays experienced by peripheral nodes would cripple their ability to find solutions, and these nodes would not be incentivized to remain in the network. Hence the network segregation effect mentioned earlier.

Unfortunately, given the current technological level of our civilization, we do not have at our disposal a technology that allows us to communicate fast and reliably over the large expanse of space. It would take a hell of a wait time to process a payment made by a mining corporation in the TRAPPIST-1 system, 12 parsecs away in the constellation of Aquarius, to a planetary engineering corporation located in the Sol system… twice 39 years plus the block confirmation time.

Fortunately, the Sci-Fi literature already offers a solution for our problem. For those of you, Sci-Fi nerds like myself, who have already read Rocannon’s World [LEGU] and Ender’s Game [CARD], the ansible device must sound very familiar. The ansible is a fictional device capable of faster-than-light communication. To word this differently, an operator of such a device can send and receive messages to and from another device over any distance with no delay.

Hence, even if only in the realm of science fiction, we will be able to devise a solution for the problem that future cryptocurrency enthusiasts, living in outer space colonies, will eventually face.

The Special Relativity Theorem

Before elaborating more on the required upgrade of the network protocol, we have to discuss the special relativity theorem and its implications for how time and distance are perceived in different reference systems.

Albert Einstein was awarded the Nobel Prize in Physics in 1921. He received it for his contributions to the understanding of the photoelectric effect, after publishing a paper on it in 1905. At that point his contributions to the understanding of gravity through his theory of relativity were well known, but the new perspective on gravity offered by Einstein’s theory was so controversial that the Nobel Prize Committee members chose to protect their reputation. They decided that it was appropriate to award Einstein the Nobel Prize for “his services to theoretical Physics, and especially for his discovery of the law of the photoelectric effect.”

During his research Einstein attempted to reconcile the principle of relativity with the principle of the constancy of the velocity of light. This attempt led Einstein to the discovery of the special relativity theorem. Einstein’s Gedankenexperiment (thought experiment) with a test subject travelling by train is very well known in the scientific community. If our human subject is walking towards the front of the train with velocity w with respect to the train, and the train is moving with velocity v with respect to the embankment, then an observer on the embankment will measure the velocity of our subject as W = v + w. If instead of our traveller we consider a beam of light propagating with velocity c, the velocity measured by the observer on the embankment would be v + c. However, this violates the principle that the velocity of light is constant in any inertial reference system and equal to c. Einstein found a solution for this problem and thus resolved the incompatibility.

One direct consequence of the special relativity theorem is the Lorentz transformation.

Before the relativity theorem, time in physics had an absolute significance, independent of the state of motion of the reference system. In relativity, however, every inertial reference system has its own particular time, and we always have to be told which inertial reference system a statement of time refers to. If you look at your watch, the time you read is the time as measured in the Earth inertial reference system. By the way, the Earth circles the Sun on an almost circular orbit, which means that any body maintaining a constant position in the Earth reference system should experience some centripetal force. However, this centripetal component is negligible compared to the gravitational pull of the Earth; hence, engineers use this approximation when calculating satellite orbits.

Back to our original train of thought… the axiom that lays the foundation for the Lorentz transformation states that every ray of light possesses the velocity of transmission c relative to any inertial reference system. That is the velocity of transmission in vacuo (in a vacuum).

Following Einstein’s thought experiments [EINS], the above-mentioned axiom leads to a set of four equations that relate the spatial and temporal coordinates of two inertial reference systems:

  x’ = (x – vt)/√(1 – v²/c²)
  y’ = y
  z’ = z
  t’ = (t – vx/c²)/√(1 – v²/c²)

where:

  x, y, z, t are the coordinates in the inertial reference system K,
  x’, y’, z’, t’ are the coordinates in the inertial reference system K’,
  v is the relative velocity between the two inertial reference systems;

also, the expression γ = 1/√(1 – v²/c²) is known as the Lorentz factor.

The Lorentz transformation equations are a more general case of the Galilean transformation:

  x’ = x – vt
  y’ = y
  z’ = z
  t’ = t

which is the basis of the assumption of classical mechanics that both spatial and temporal coordinates are absolute. The above result from letting c tend to ∞ in the Lorentz transformation equations.

The Lorentz transformation has a few corollaries concerning time dilation, length contraction, relativistic mass, relativistic momentum, and relativistic kinetic energy. The one that concerns us is the time dilation corollary:

 Δt’ = γΔt

where γ is the Lorentz factor introduced above:

 γ = 1/√(1 – v²/c²)

Assuming a clock at rest in the inertial reference system K, and moving with velocity v in the inertial reference system K’, the time Δt’ between two ticks as measured in frame K’ is longer than the time Δt between the same ticks as measured in the rest frame of the clock, K.
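
As a quick numerical check, a minimal Python sketch (the function names are ours) of the Lorentz factor and the dilated tick interval:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v: float) -> float:
    """gamma = 1/sqrt(1 - v^2/c^2) for a relative velocity v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A clock ticks every 1 s in its rest frame K; observed from a frame K'
# moving at half the speed of light, each tick takes gamma seconds.
gamma = lorentz_factor(0.5 * C)
print(f"gamma = {gamma:.4f}")          # ~1.1547
print(f"dilated tick = {gamma:.4f} s")  # Δt' = γ·Δt ≈ 1.1547 s
```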

Relativistic Effects on Proof-of-Work

To better understand the relativistic effects on how relative time is perceived in two inertial reference systems (or Galilean reference systems [EINS]), we have to mention the twin paradox. The twin paradox is a thought experiment that involves two identical twins. One of the twins makes a journey into space onboard a relativistic spaceship and, upon his return to Earth, discovers that his twin has aged much more than he did. Depending on how fast the spaceship moves through space and/or how long the journey was, our traveller could return to Earth and realize that several generations have passed in the meantime.

As a direct application of the twin paradox, we have Mazer Rackham, International Fleet Admiral and Ender Wiggin’s mentor, as portrayed by Orson Scott Card [CARD]. The fighter pilot who destroyed the Formic Fleet Flagship, killed the Hive Queen, and ended the Second Formic Invasion, Mazer Rackham is sent on a journey on a relativistic spaceship and returns to Earth 100 years later in order to assist the International Fleet. Upon his return he has barely aged a few years.

Similarly, if a subnetwork is moving relative to the rest of the network, the nodes in the subnetwork will experience time dilation as predicted by the special relativity theorem. Hence, they are at a disadvantage compared to the rest of the nodes, because they have less time available to find a solution for the proof-of-work problem. To make things square for all the nodes in the network, the difficulty of the problem should be adjusted in each subnetwork using the Lorentz factor: to compensate for time dilation, the relativistic nodes should have to solve a simpler problem, and their block confirmation time should decrease as well.

One element is still missing… how do the ansibles synchronize? What value does the Lorentz factor take for each of them? One simple solution is to have one of the ansibles (say, the Earth-bound one) broadcast a beacon every fixed number of seconds. The ansibles interfacing with the subnetworks would pick up the broadcast and, by measuring the time interval between beacons, infer the Lorentz factor. Once the Lorentz factor is determined, the nodes mining on the subnetworks would have their proof-of-work difficulty and block confirmation time adjusted accordingly, as sketched below.
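
A sketch of how that adjustment could look under this post’s assumptions (the beacon scheme and the linear scaling by γ are this post’s own conjecture, and ansibles remain fiction):

```python
EARTH_BEACON_PERIOD = 600.0  # seconds between beacons in the Earth frame

def infer_gamma(measured_period: float) -> float:
    """Per the post's scheme, a time-dilated subnetwork measures Earth's
    beacons arriving farther apart; the ratio of periods gives gamma."""
    return measured_period / EARTH_BEACON_PERIOD

def adjust(base_difficulty: float, base_block_time: float, gamma: float):
    """Compensate for time dilation: give relativistic nodes a simpler
    problem and a proportionally shorter local block confirmation time."""
    return base_difficulty / gamma, base_block_time / gamma

gamma = infer_gamma(693.0)  # beacons measured ~693 s apart locally
difficulty, block_time = adjust(1.0, 600.0, gamma)
print(f"gamma={gamma:.3f}  difficulty={difficulty:.3f}  block_time={block_time:.1f} s")
```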

A More Down-to-Earth Solution for the Problem

We will fork the storyline (pun intended) at the Enter the Ansible section and suggest a more down-to-earth solution for the problem. The peripheral nodes are not to participate in the consensus; their function would be just to relay messages (a.k.a. payment information) across the network. In our multi-planetary scenario, the nodes working on the proof-of-work would reside on or around the Earth, and the network nodes on Mars or in any other region of the Sol system would have to wait 2 × (network packet travel time) + (block confirmation time) in order to confirm a payment. However, such a solution would have an Achilles heel that is very hard to defend: by starting its mining engine, any peripheral node could take over the local network and force a topological fork of the network. For a node, the incentive to stay honest on the current network must be stronger than any reward obtained by cheating.
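
Back-of-the-envelope numbers for the Mars case (a small sketch; the inputs are the one-way Earth–Mars light time, roughly 3 to 22 minutes depending on the planets’ positions, and Bitcoin’s 10-minute confirmation interval mentioned earlier):

```python
def confirmation_wait(one_way_light_minutes: float,
                      block_confirmation_minutes: float = 10.0) -> float:
    """Round-trip packet travel plus one block confirmation, in minutes."""
    return 2 * one_way_light_minutes + block_confirmation_minutes

for one_way in (3.0, 12.5, 22.0):  # best case, mid-range, worst case
    print(f"one-way {one_way:4.1f} min -> wait {confirmation_wait(one_way):.1f} min")
```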

Concluding remarks

While this blog post contains some elements of fiction, it accurately describes the cryptocurrency ecosystem, the special relativity theorem, and the Lorentz transformation with its corollaries. We would like to thank the readers who went through this exercise with us. Undeniably, cryptocurrencies are disrupting the global economy and they are here to stay. Undeniably, we, as a species, will explore and migrate farther than LEO. History has taught us that decentralization is the key to survival and prosperity. Let us together make the future happen.

References

[ANTO] Antonopoulos, Andreas M., Mastering Bitcoin, Second Edition, O’Reilly Media, Inc., June 2017;
[CARD] Card, Orson Scott, Ender’s Game, A Tor Teen Book, 2014;
[EINS] Einstein, Albert, Relativity, The Special and the General Theory, Princeton University Press, 2015;
[LEGU] Le Guin, Ursula K., Rocannon’s World, Harper & Row, 1977;
[NAKA] Nakamoto, Satoshi, Bitcoin – A Peer-to-Peer Electronic Cash System, www.bitcoin.org, 2008;


Credits: SpaceX

Disruptive technology may sound like a bizarre (and scary) concept, but the idea behind it is neither bizarre nor scary. The concept was introduced by Clayton Christensen. In one of his books, The Innovator’s Dilemma: The Revolutionary Book That Will Change the Way You Do Business, Christensen shows that, under certain circumstances, companies that do things right can lose their market share or even go out of business. He also presents a set of rules that can help companies capitalize on disruptive innovation.

While I am not trying to give a lecture on economics, I would like to understand how to apply (if possible) the principles of disruptive technologies to the space industry. A very good example is quite at hand… SpaceX.

We can start by defining the key concepts: sustaining technology and disruptive technology. These are the textbook definitions: a sustaining technology is a new technology that improves the performance of established products, the performance being perceived along the dimensions that mainstream customers value. A disruptive technology is a new technology that brings a radical value proposition to market. Disruptive technologies underperform established products in mainstream markets, but they have features that are valued by some customers.

What is not obvious is that even though disruptive technologies may result in worse product performance in the short term, they can be fully competitive in the same market in the long run because technologies tend to progress faster than market demand.

Now let us look at the five principles of disruptive technologies (as defined by Clayton Christensen):

Principle #1: Companies depend on customers and investors for resources (at the end of the day, the customers and the investors dictate how a company spends its money).

Principle #2: Small markets do not solve the growth needs of large companies (large companies wait until small markets become interesting, and entering a small market only at the moment it becomes interesting is often too late).

Principle #3: Markets that do not exist cannot be analyzed (there are no established methods to study or to make predictions for emerging markets, as there is no data to infer from).

Principle #4: An organization’s capabilities define its disabilities (we all have our blind spots).

Principle #5: Technology supply may not equal market demand (as established companies move towards higher-margin markets, the vacuum created at lower price points is filled by companies employing disruptive technologies).

Why do you think established companies fail to adopt disruptive technologies? Established companies listen to their customers, invest aggressively only in new technologies that give customers more of the better products they want, and study their markets and allocate investment capital only to the innovations that promise the best returns. Good management is sometimes the very reason established companies fail to stay atop their industries.

And this is why technology startups can fill the niche… Many of the widely accepted good management principles are only situationally appropriate. Sometimes it is right not to listen to your customers, right to invest in technologies that promise lower margins, and right to pursue small markets. This can happen in a small company, a technology startup where big outside stakeholders are not vested and where new technology development is the big drive.

Now that the lecture has been delivered, it is time to ask the questions. Why is SpaceX perceived as disruptive? Is SpaceX really disruptive? In what way?

The declared goal of SpaceX is to make space more accessible, that is, to bring the kg-to-LEO prices down. If you have a basic knowledge of launch systems, you know that the propulsion technology employed today is pretty much the same as that used by the Mercury, Gemini, and Apollo space programs: liquid fuel rocket engines. The Russian Soyuz, whose basic rocket engine design has not changed much since the Semyorka days, is living proof that rocket engineers do not want to fix things that work well. While aerospike engines and nuclear rocket engines make the front page from time to time, the good old liquid fuel expansion nozzle rocket engines will be here to stay for a long time.

Given the circumstances, how does one bring the manufacturing and launch costs down? As a software engineer who spent a number of years in a software startup, I can recognize a number of patterns… First, Musk knows how to motivate his engineers. Doing something cool is a big driver. I know that. And working on a space launch system that one day may put the first human colonists on Mars must be a hell of a motivator.

Modular design… software engineering principles are at work. Build reliable components and gradually increase the complexity of your design. Falcon 9 and Falcon Heavy are built on a modular design that has the Merlin 1D engine at its core. An important detail to mention here: SpaceX builds the hardware in-house. Obviously, outsourcing would increase the manufacturing costs.

If you are familiar with the Russian Soyuz launch vehicle, you will acknowledge that Musk has borrowed proven (and cheaper) technology for the Falcon launch vehicles: LOX/RP-1 propellants, vernier thrusters, and horizontal integration for the first stage, second stage, and the Dragon spacecraft. These choices simplify the overall design and bring the costs down substantially.

To put it the way SpaceX has many times: “simplicity, reliability, and low cost can go hand-in-hand.”

One thing to notice is that the most important innovation introduced by SpaceX is in the design and manufacturing process, which is in-house and as flat as possible. Rearranging the pieces of the puzzle can often give the competitive advantage. Lean and mean is the new way.

SpaceX is not just trying to bring down the launch prices, it is actually trying to disrupt the status quo… and this makes the battle harder. SpaceX dixit: “SpaceX’s goal is to renew a sense of excellence in the space industry by disrupting the current paradigm of complacency and replacing it with innovation and commercialized price points; laying the foundation for a truly space-faring human civilization.”

When developing the theory around disruptive technologies, Clayton Christensen has studied the hard disk drive and the mechanical excavator industries. The US space industry is a different ecosystem. Do the 5 principles presented above need adjustment?

Not really. Principle #1 is valid and applies in this case as well: self-funded SpaceX followed a market strategy not dictated by customers or investors. The small payload launcher market, targeted by SpaceX with Falcon 1 and Falcon 1e, was an area neglected by established space companies, as Principle #2 predicts. Principle #3 explains why established companies have neglected the small payload market.

Does mastering small payload launcher technology qualify one to enter the heavy launcher market? SpaceX managed to overcome Principle #4. Will SpaceX retire its Falcon 1 launch vehicles and leave the small launcher market for good? Here I would see Principle #5 as a warning: while heavy launchers offer better profit margins, would it be a smart move to leave an emerging market that (currently) offers low profit margins? This remains to be seen.
