Over the decades, research in the medical sector has achieved some wondrous results by utilizing a whole host of novel technologies. One of these is 3D printing, and it is now being used in the fight against diabetes, a disease that causes irregularities in a sufferer's blood sugar levels. We're not quite at the point where scientists can 3D print entire pancreases for transplantation, but additive manufacturing is being used in other clever ways. These include the 3D printing of glucose sensors for blood sugar monitoring and even the 3D printing of bionic pancreas models for drug testing and discovery.

The rise of 3D printed glucose sensors

For people who live with diabetes, constant finger pricking and costly glucose monitoring systems are commonplace. These techniques are used to monitor blood sugar levels on an ongoing basis to ensure the parameter stays within a safe range. To find a workaround, researchers from Washington State University have previously employed 3D printing to produce wearable glucose biosensors. The devices can be integrated into straps and gloves and essentially stick to a patient's skin to monitor bodily fluids such as sweat. The sensors were 3D printed using a process called direct ink writing, which allowed the team to deposit fine lines of functionalized inks. Here, the team used a nanoscale material that resulted in small, flexible electrodes capable of detecting glucose in sweat droplets. Because the 3D printing process is so precise, the material is printed in uniform layers, which increases the sensitivity of the sensors. Unlike a needle prick, the devices are non-invasive, and they were even found to outperform traditional sensors at detecting glucose. Because 3D printing imposes few geometric limitations, the sensors can be custom printed on a patient-by-patient basis, catering to various needs.
A similar project from the National and Kapodistrian University of Athens saw researchers recently 3D printing a ring with identical functionality. Manufactured using FFF 3D printing technology, the ring integrates with standard smartphone software to monitor blood sugar levels from sweat. Compared to traditional manufacturing processes, the 3D printing of biosensors reduces waste while cutting manufacturing costs and improving the accuracy of the glucose monitors.

3D printing of the pancreas

The other major way in which 3D printing is being used to fight diabetes is with 3D printed pancreas models. Just this year, a number of academic partners across Europe, including the University of Naples and ETH Zurich, came together for the ENLIGHT project. The project aims to 3D bioprint a living model of the human pancreas. While this may sound off-the-wall, the intent is to improve the testing protocols for diabetes medication. The bioprinter to be used in the four-year study is currently being developed by Readily3D, a specialist in volumetric 3D printing technology for medical applications. Using the bioprinter, the project partners will fabricate pancreatic tissue structures at high speeds before adding signalling molecules to the 3D-printed pancreas models. These molecules will enable the models to simulate how a real pancreas might act when exposed to a given stimulus or chemical. What makes this so important? With access to fully functional 3D printed pancreas models, researchers will no longer need to resort to animal testing to test the efficacy and safety of new experimental drugs. The models will also mean laboratory tests can be performed to determine which candidate medication is best for a specific patient, sparing diabetes sufferers a long search with unpleasant side effects.
As a bonus, it will also save on treatment costs and may speed up drug discovery with fewer ethical dilemmas in the way of testing.

The future of 3D printing in the fight against diabetes

After asthma, diabetes is the most common chronic disease found in children worldwide, and the condition affects an estimated 34 million Americans. Despite this, diabetes treatments haven't progressed all that much when compared to other areas of medicine. This is why it's imperative that novel technologies like 3D printing are put to good use. With multi-million dollar projects like ENLIGHT being funded by innovation grants, there is much hope for novel drug testing, and researchers worldwide are putting their 3D printing skills to use with glucose biosensors. The next step is what most would consider the holy grail of 3D bioprinting: a transplantable pancreas. While probably still decades away, a patient-specific bioprinted pancreas would solve the biocompatibility issues associated with transplants, paving the way for a diabetes-free society.
Electrical Energy Storage using Synthetic Biology – Is there Potential?
With the rise of renewable energy, the world faces a new problem to solve. During the years of the fossil fuel monopoly, energy storage was not a problem, since nature itself was responsible for providing these deposits. The strategy with most renewable energies is different: the aim is to capture mechanical and thermochemical energy through electrical conversion. We have the materials to generate electrical energy from natural elements; now we just need a way to store this energy without significant environmental or economic loss. Microorganisms may have some things to say on this matter.

There are unicellular organisms called extremophiles. Despite their apparent simplicity, they have managed to colonize niches close – from an anthropic point of view – to the limit of life-sustaining chemistry. The most famous examples are acidophiles, which are usually also thermophiles, although there are others such as barophiles (cells whose membranes withstand extreme pressures at the bottom of the ocean) and electrophiles (organisms that use electricity to grow). It is well known that there are electrogenic bacteria and archaea, capable of modifying their chemical environment to generate electricity.

It is worth remembering the nature of electricity itself. The nerve impulses of any higher animal are not so different from the current in a wire: the action potential travels along myelinated axons in a manner analogous to the current in a toaster's wiring, advancing toward the point of least resistance, or highest electrical potential. We are therefore dealing with a ubiquitous form of energy, present wherever chemical reactions occur in ordinary matter. In the same way, microorganisms take advantage of changes in the electrical gradient – generated by the selective permeability of their membranes to certain ions – to nourish themselves or interact.
One application of Geobacter sp., for example, is the bioremediation of heavy metals in contaminated soils. Beyond these functionalities, the greatest example of renewable energy recycling is found in photosynthetic biology. Photosynthetic organisms are capable of recovering "4,000 EJ yr⁻¹ (corresponding to an annually averaged rate of ≈ 130 terawatts (TW))"1 from the Sun. This is estimated to be about 6 times more energy than the annual consumption of human society, which is about 20 TW. It is therefore not unreasonable to think that the same organisms that store solar energy could also receive input from other renewable energies such as wind, or even from non-renewable energies such as nuclear.

However, the performance of photosynthesis is not perfect. One cause is that photosynthesis has evolved so that carboxylation and the assimilation of sunlight occur at the same time in the same cell (or reaction container). Attempts to overcome this mismatch between CO2 fixation and water photolysis are called re-wired carbon fixation or microbial electrosynthesis. Finally, long-range electron transport would be required. An interesting model is known as SmEET (solid matrix extracellular electron transport). It rests on three pillars: electron transport from the electrode to the cell surface, electron transport from the membrane to the cellular electron transport chain, and the formation of reducing agents to be incorporated into CO2 fixation. The problem is that no known organism can perform rewired carbon fixation and SmEET at the same time, which establishes a new area of work for systems biology and genetic engineering. In a nutshell, we are talking about a living solid-matrix system, called an electroactive biofilm, connected to electrodes. A natural question is how much electrical energy this system can host, which can also be expressed as the maximum conductivity of the biofilm.
Different calculations and estimates of this conductivity have been made, on the order of 5 × 10⁶ S cm⁻¹ at 30 °C. One aspect often forgotten in reviews of this topic is that carbon-containing macromolecules can be custom designed for energy accumulation. A good candidate would be natural CO2 binders that accumulate CO2 in the form of a plastic (polyhydroxybutyrate, PHB). The best-studied bacterium so far is Ralstonia eutropha, capable of producing 15 g of PHB per liter per hour. To access this stored energy, the cell can use its own oxidative metabolism coupled to external electron transport.

The Future

This is an uncertain technology, since the challenges to be overcome are still very great and there is no guarantee that they can be solved. This also means there are no start-ups betting on this card yet. After all, one thing must be very clear: the capacity of biological systems to capture energy of any kind and transform it into other, more stable forms, such as chemical energy, is unique. On the other hand, and although it may seem like something out of a science fiction movie, it has been experimentally proven that Geobacter sulfurreducens can adhere to electrodes while performing its normal electrogenic metabolism, so such cells can be classified as biological micro-batteries. Beyond this, a large-scale energy storage system requires major discoveries in the field of systems biology; an efficient, low-cost, long-range electron transport model that is simple in design, safe and effective; and, above all, gene editing of the heterologous microorganism that will express the carbon-fixing apparatus. This whole idea is likely to be greatly enhanced when nanotechnology joins the ranks of scientists trying to solve the emerging problem of renewable energy management.
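As a quick sanity check on the photosynthesis figures quoted earlier (4,000 EJ per year, ≈130 TW, roughly six times humanity's ~20 TW consumption), the unit conversion fits in a few lines; this is back-of-the-envelope arithmetic for illustration, not a calculation from the cited source:

```python
# Rough sanity check of the photosynthesis energy figures quoted above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600           # ~3.156e7 s

annual_capture_ej = 4000                        # EJ captured per year by photosynthesis
annual_capture_j = annual_capture_ej * 1e18     # 1 EJ = 1e18 J

average_power_tw = annual_capture_j / SECONDS_PER_YEAR / 1e12  # W -> TW
print(f"Average photosynthetic power: {average_power_tw:.0f} TW")  # ~127 TW, i.e. ≈130 TW

human_consumption_tw = 20
print(f"Ratio to human consumption: {average_power_tw / human_consumption_tw:.1f}x")  # ~6.3x
```

The conversion lands within rounding of both quoted numbers, so the "about 6 times" claim is internally consistent.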
Decoding the Potential of DNA as a Digital Data Storage System
Many great scientists have argued throughout history about what living things are. Until not so long ago, minerals were considered the third kingdom of life, as they grew before the experimenter's eyes from tiny particles into perfectly ordered crystals. Growth and the reproduction of the characteristics contained in a body are therefore a fundamental basis of living things. Of course, mysticism gradually gave way to scientific evidence during the 19th and 20th centuries. Eventually, researchers would find in the nucleus of cells a slightly acidic mass that seemed important for cytoplasmic maintenance, since if it was removed, cellular functions would soon cease. It was not until the joint work of Rosalind Franklin, Watson, and Crick that DNA was understood as a chemically stable tangle capable of encoding all the biological information that makes up an organism. These strands were also accessible by the proteins they encoded, which performed impressive molecular functions, such as the duplication of the information contained, its maintenance and protection against mutations, and the regulation of gene expression in response to environmental stimuli. This invention of evolution is so well suited to storing information that, if we think about its origin, we soon realize that since the appearance of the first modern DNA molecules on Earth, no physical or chemical agent has spoiled the information they contained; it has perpetuated, propagated, and updated itself automatically for millions of years. We ourselves are heirs to this marvel of natural engineering, to the first self-replicating strands. Parallel to the development of molecular biology, humans have also developed a system for perpetuating information, albeit of an abiotic nature: computation. It is in the progress of these two disciplines that the gaze of scientists and engineers inevitably turns to the point where their paths converge.
In the search for more efficient and compact storage technology, it seems that engineers cannot, for now, compete with the evolution of primordial chemistry. The advantages go well beyond small size and stability. The most obvious and tempting is the encoding capability, which would move from a binary system of zeros and ones to a quaternary system based on the four nucleotides adenine, cytosine, guanine, and thymine. Moreover, molecular engineering allows us to be imaginative here, since we can devise a system using more than two pairs of nucleotides. The chemical structure of DNA can also be modified to suit our interests; a good example is the manufacture of morpholinos, synthetic molecules based on the structure of DNA but with a different chemical composition.

However, there are still some negative aspects to take into account. Encoding information in DNA is relatively easy: you just need an encoding scheme with which to read the nucleotides in a particular direction. Things get complicated when it comes to reading this "codex" back. State-of-the-art DNA sequencing technologies - such as Illumina or Oxford Nanopore - cannot read entire DNA molecules, only more or less short fragments. If you think it is enough to put the sequenced pieces together, you are not entirely wrong, but it is more complicated than that. It turns out that you cannot sequence just one molecule; you need a sufficient concentration of molecules to run the sequencing reaction. There is always a prior step: the amplification of our encoded DNA by the now famous PCR (polymerase chain reaction). As you can imagine, sequencing forms a rather intricate amalgam of puzzles. It should also be noted that these processes, including sequence reading, have an error rate that can compromise the code, and it could take several weeks before we get the information back.
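The binary-to-quaternary mapping described above can be sketched in a few lines. This is the naive two-bits-per-nucleotide scheme; real encoders such as DNA Fountain add screening against problematic sequences (homopolymer runs, skewed GC content) and error correction on top:

```python
# Minimal illustration: map each pair of bits to one of the four nucleotides.
# Real encoders avoid homopolymer runs and skewed GC content; this sketch does not.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn every byte into four nucleotides (two bits per base)."""
    bitstring = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bitstring[i:i + 2]] for i in range(0, len(bitstring), 2))

def decode(dna: str) -> bytes:
    """Invert the mapping: four nucleotides back into one byte."""
    bitstring = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bitstring[i:i + 8], 2) for i in range(0, len(bitstring), 8))

message = b"Hi"
strand = encode(message)
print(strand)                  # CAGACGGC
assert decode(strand) == message
```

Note that the round trip is lossless only on paper; as the text explains, synthesis, amplification, and sequencing each introduce errors that a practical scheme must tolerate.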
Despite these serious drawbacks, on March 3, 2017, Yaniv Erlich and Dina Zielinski published a reliable encoding and reading method called DNA Fountain that avoids these errors; they were able to "store a full computer operating system, movie, and other files with a total of 2.14 × 10⁶ bytes in DNA oligonucleotides and perfectly retrieve the information from a sequencing coverage equivalent to a single tile of Illumina sequencing". Since then, many private initiatives have sought to perfect this process. One of the most promising companies is Catalog, founded by two MIT scientists, which aims to be the first to commercialize this type of storage. Another interesting start-up is Evonetix, which has focused on enhancing the read length of DNA strands. In synthetic DNA manufacturing, Kilobaser is making DNA "printers" for around $9k. There is also much interest in DNA storage in vivo, as the cellular medium is ideal for maintaining it.

The Future

The tendency to optimize processes that occur in nature is inevitable, and the same is true of human industrial processes. The future looks bright for DNA storage, at least in the short and medium term. We cannot yet say that this system is here to stay and prevail: the upper limit for DNA information encoding has been calculated by weight (4.606 × 10²⁰ bytes/g) and by volume (4.606 × 10¹⁷ bytes/mm³), and although that is quite a lot, perhaps a better way of compressing information without redundancies will be found. What is certain about the future of information is that the demand for its storage will grow, since it is natural to want to preserve it: the very essence of nature tells us so.
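To get a feel for the quoted upper limit of roughly 4.6 × 10²⁰ bytes per gram, a one-line calculation shows how little DNA a zettabyte would theoretically need:

```python
# How much DNA would a zettabyte need at the quoted theoretical density?
BYTES_PER_GRAM = 4.606e20     # theoretical upper limit quoted in the text
ZETTABYTE = 1e21              # bytes

grams_needed = ZETTABYTE / BYTES_PER_GRAM
print(f"{grams_needed:.1f} g of DNA per zettabyte")  # ~2.2 g
```

In other words, at the theoretical limit, a couple of grams of DNA would hold a zettabyte; practical schemes sit far below this ceiling because of the redundancy needed for reliable reading.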
Algae-Based Biofuel - A Promising Alternative or a Distant Myth?
The ills of the current energy model are becoming increasingly evident in many environmental and health areas. Since humans became aware of the potential of fossil fuels, there has been no real, profound change in how we use them. It is true that we have learned to store electrical energy and even developed nuclear fission power, and that we are using less coal and more renewables, but numerically we are still dependent on the remaining oil reserves. It is estimated that in just 40 years these pockets of crude oil will be exhausted. This is where future alternatives must take advantage of the situation. One of the most interesting from an environmental point of view is algae-based biofuel, a fuel that is not only renewable but also CO2-neutral.

How does it work?

The idea is to extract carbon skeletons from living biomass - algae - that grows by fixing atmospheric carbon in the Calvin cycle and through the photolysis of water in photosynthesis, giving rise to photoassimilates that will form part of the new biomass in the system. The advantages of using photosynthetic microorganisms instead of vascular plants are many: both growth and photosynthetic performance are much more efficient, they can be grown in a liquid medium that supplies all nutrients, and they do not require large surface areas. To produce biofuel, the biomass must be subjected to precise temperature and pressure conditions, the most commonly used process being hydrothermal liquefaction. These thermochemical reactions are scalable: the process is first designed in small bioreactors to analyze the performance of various growth procedures, algal lineages, or other effects to be tested. In a strictly chemical sense, this fuel is very similar to what we can produce in traditional refineries. However, the main advantage is the carbon recovery rate per unit of time.
To put it more clearly, it allows us to accelerate a process of incomplete oxidation that would naturally take millions of years to occur under specific circumstances of pressure and dehydration - a process that ends up lithifying by diagenesis into oil shales or generating crude oil and natural gas, the latter being nothing more than a mixture of hydrocarbon gases dominated by methane. All the phases and by-products of the monstrous geological cycle of organic renewal on our planet have their analog in the manufacture of biofuel. Of course, another fundamental advantage of this form of energy is its cleanliness. Contrary to what it might seem, the combustion of biofuel does not emit more carbon dioxide than was first removed from the atmosphere by algae growth. In fact, it emits slightly less, because the reactions leave by-products with a certain carbon content - the yield is not maximized.

What are the challenges?

Some of the challenges still to be overcome for biofuel to become a serious alternative in the market include improving overall process performance. To begin with, many by-products are generated in various states of aggregation (gas phases, liquids, complex solid mixtures). Their recirculation is possible in most cases, although a certain fraction of the solid waste still needs extensive study to make the process energetically profitable and environmentally safe. There are many initiatives under the banner of the circular economy that try to add value to this technology, which is presented as one of the best transitions to an energy model based on nuclear fusion, the inevitable scenario for humanity from 2050 onwards. Moreover, the by-products of the thermochemical reactions will be varied and will not only contain carbonaceous skeletons. Living things are composed of a multitude of different macromolecules, the most abundant by weight being lipids and proteins.
After extraction of the biofuel, the lipid derivatives in the residues are depleted, so complex nitrogenous mixtures will form part of this necessary recirculation. From here, chemicals can be extracted for sale to other industries, such as cosmetics. Several well-known start-ups have occupied different niches within biofuel precisely to respond to this need to recycle by-products: Manta Biofuel uses hydrothermal liquefaction, NeoZeo converts biogas to biomethane, and Enerkem transforms waste into biofuel and chemicals.

Another action needed to further enhance the rate of carbon sequestration is improving the genetic background. The production of this biofuel depends directly on the concentration of lipids in the cells and the proportion of unsaturated fatty acids (up to 12%). It also depends on the CO2 administered during the scale-up phase and, no doubt, on factors still unknown. One way to significantly improve performance is to uncover these aspects through experimentation, use artificial intelligence software and systems biology to understand which genes or metabolic pathways need to be modified, and edit the genome of our pre-selected strain without disturbing the algae's homeostatic physiology. In the near future, we will start to see interesting start-ups in this area; it has been on the table at universities for the last two decades.

The Future

Although biofuel is still far from being a tangible reality in the wholesale market, interest in it has not ceased to grow in recent years, mainly due to the environmental catastrophe and, above all, to the economic crisis that the gradual depletion of oil reserves would entail without a substitute of similar energy qualities. In parallel, renewables and nuclear fission will play an increasingly leading role until humanity's best hope, nuclear fusion, becomes operational from 2050 (according to the ITER roadmap).
It remains to be seen how things will develop, but it certainly feels inevitable in the mid-term that biofuel will become an energy safeguard.
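The CO2-neutrality argument above boils down to simple carbon bookkeeping: carbon fixed by the algae versus carbon returned to the air on combustion, with some retained in by-products. A toy balance, with an assumed by-product fraction purely for illustration:

```python
# Toy carbon balance for the "CO2-neutral (or slightly better)" claim.
# The by-product fraction is an assumption for illustration, not a measured value.
carbon_fixed_kg = 100.0          # carbon captured from the atmosphere by algae growth
byproduct_fraction = 0.15        # fraction retained in solid/liquid by-products (assumed)

carbon_in_fuel_kg = carbon_fixed_kg * (1 - byproduct_fraction)
carbon_emitted_kg = carbon_in_fuel_kg   # complete combustion returns the fuel's carbon to air

net_emission_kg = carbon_emitted_kg - carbon_fixed_kg
print(f"net CO2 emission: {net_emission_kg:.1f} kg of carbon")  # -15.0, a slight net removal
```

Whatever the actual by-product fraction, as long as it is positive, combustion returns less carbon than the algae originally fixed, which is the point the text makes.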
With how far 3D printing has come to date, the applications of the technology seem endless. As well as conventional headliners such as aerospace, medical, and automotive, companies and researchers alike are now using 3D printing to produce functional energy storage devices such as batteries. Lithium-ion batteries in particular have become essential to modern life, as they power everything from mobile phones to aerospace navigation systems. As a result, there is an abundance of electrochemistry research aimed specifically at improving the performance of these batteries: increasing their capacities (energy densities), making them smaller and more affordable, and raising their charging rates. The development of higher-quality batteries also has major implications for climate change, as they can help promote the use of renewable energy sources, ultimately reducing our dependence on fossil fuels. Unfortunately, applications such as electric vehicles and city-scale renewable energy storage are still limited by the energy densities and charging rates available today. This is where 3D printing can lend a helping hand.

Taking Li-ion batteries to the next level

The most recent example of the application comes from the California Institute of Technology, where scientists developed a novel method of 3D printing lithium-ion battery electrodes. Leveraging DLP 3D printing technology, a visible-light-based form of resin-based SLA, the team was able to fabricate carbon and lithium cobalt oxide structures to be used as anodes and cathodes respectively. DLP 3D printing inherently allows for high-resolution printing, meaning the team was able to manufacture high-precision, complex electrode geometries with sub-structures at the micro- and nano-scale.
This had the effect of increasing the mass loading of the custom electrodes, meaning the devices delivered higher capacities and faster charging rates than conventionally manufactured counterparts.

Commercializing 3D printed energy storage

It's not just the academic space investigating the application, either: Swiss battery technology firm Blackstone Resources recently began 3D printing its own Li-ion solid-state batteries. The company eventually plans to penetrate the electric vehicle market and has already used its proprietary inkjet-based 3D printing process to prototype and test an initial set of battery cells. Beyond just printing electrodes, Blackstone is producing entire solid-state batteries, which replace the liquid electrolyte altogether. The technology was developed to rival today's conventional battery production lines and is intended as a much more flexible and cost-effective alternative. While traditional production lines are highly specialized and can only manufacture one type of cell at a time, the Blackstone process can achieve a wider range of cell formats while eliminating the use of harmful solvents. The technology also comes with performance benefits: the company's prototypes have delivered energy density increases of around 20% compared to similar conventionally manufactured batteries.

3D printer manufacturer Photocentric has also been dabbling in this market space, recently launching a new division focused specifically on developing eco-friendly 3D printed electric batteries. Again targeting the automotive sector, Photocentric hopes to use its VLP 3D printing technology to facilitate the production of low-cost car batteries. The project stems from the fact that today's battery cells are too large, too heavy, and quite simply not optimized for automotive use.
As such, vehicle designs can sometimes be influenced by energy storage device availability rather than suitability, which should come first and foremost. Using its technology, Photocentric aims to strike a deal with Tesla at the upcoming UK-based Gigafactory. The company has already expanded its team of scientists to develop these battery cells and hopes to drive the future of environmentally conscious production with optimized batteries for the automotive sector.

Where are 3D printed batteries headed?

Much like many of the novel applications we see floating around, the 3D printing of batteries is very much in its infancy. This isn't to say the application lacks potential, however, as companies like Blackstone and Photocentric are leading the charge with experimental research and development. 3D printing such devices can have major benefits for supply chains as well as product performance, and the sooner companies in the 3D printing space realize this, the sooner it will take off.
The insurance industry has historically faced many challenges, including an overabundance of data, manual computation of each transaction, and long, boring work hours for employees, resulting in slow service. For all of these problems, there exists one solution: Robotic Process Automation (RPA).

What is an RPA?

Traditionally, insurance companies computed every transaction and its data either by hand or with bulky, unreliable APIs. These APIs caused scalability issues and required expensive technical support, which discouraged insurance firms from upgrading and kept their processes slow. The advent of RPAs erased the need for bulky upgrades or expensive remodels of the software that keeps the insurance industry running smoothly 24/7.

But what can RPAs really do in the insurance industry? Insurance firms have a complex workflow that, if not optimised, can get extremely inefficient very fast. RPAs can be used in many places within the workflow, such as processing claims, keeping track of data, and even legal regulation. Optimising these smaller processes can dramatically multiply the efficiency of an insurance firm.

Underwriting

This is the process of verification. In order to issue insurance to a client, the insurance company must first verify the client's assets and identity. It's a painstakingly long process involving much paperwork. Instead of relying on manual labour, an RPA can complete in days a task that would typically take weeks. Underwriting RPAs are typically searching RPAs, which scour databases to find information. An employee is still required to check the results manually; however, the process is still greatly optimised, since the employee only has to deal with relevant data. An added bonus is that RPAs can be coded around privacy laws, making accidental breaches far less likely.

Regulation

Insurance companies are very tightly regulated by federal laws and standards.
These regulations are constantly changing, and it can be hard to keep up with them. Instead of changing the human workflow every time a law changes, it is much more efficient to have a coded software robot do the job. Just as RPAs can be coded around privacy laws, they can be programmed around all the regulations their particular task is subject to. This makes adapting to regulation much easier, since it only requires changing some lines of code rather than reorganising entire departments around the new rules.

Transaction Processing

Claiming insurance can be a hassle: thousands of papers, hundreds of phone calls, and it still takes months to process. A large insurance firm can receive over a million claims a day. In a situation like this, human paper processing can turn out to be extremely inefficient. That's where RPAs step in. They can not only help collect the data and the documents but also help to organise them. Although human supervision is still required, an automated process can boost the efficiency of claim processing by up to 75%.

Managing Data

A large insurance company with millions of clients has mansions full of data to manage and process. Moreover, being tightly regulated, insurance companies cannot afford any inaccurate information in their databases. In a situation like this, an automated process can be invaluable. Data-managing RPAs often work at the very root of the data: data entry. If the data is entered correctly, the rest of the work becomes much easier. This is why, instead of bulky APIs analysing transactions after they happen, RPAs record the data in real time. This not only saves time but also greatly improves accuracy.

Scalability

The scalability of software is of utmost importance for any company implementing it. In most insurance companies, the software is first implemented on a small process and then scaled up. This scaling is made much easier with automation.
RPAs can grow or shrink in scale based on need. Moreover, if drastically more automation is required, more robots can easily be deployed in the blink of an eye.

Upgrades

Technology is an ever-changing field where new upgrades happen constantly. For insurance companies to keep up with this trend, the technology they implement now must be scalable to the technology of the future. With the current APIs in use, scaling would mean almost uprooting the whole mainframe and starting again. RPAs, on the other hand, provide a much more workable solution with their building-block-like approach: they can be built on top of each other, allowing more technology to be added as it becomes available.

Tracking and Analytics

A built-in quality of RPAs is that they track the efficiency of their own systems along with that of their human counterparts. This allows the insurance company to see which processes are thriving and which still require optimisation - an analysis that can prove invaluable for large corporations. Moreover, these bots work 24/7, 365 days a year, so nothing is left out of the analysis.

Where can I get one?

Many companies are currently developing and implementing robotic process automation solutions for insurance and banking; UiPath and Automation Anywhere are the two leading providers of RPA support. These companies not only provide state-of-the-art automated software solutions but also guarantee security and customer support. They are not limited to RPAs for the insurance industry: the use cases of RPAs span any industry in need of automated efficiency. As technology evolves, RPAs have the capability of replacing much of the human workforce, performing any and all redundant tasks in any corporate environment.
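The rule-based data-entry checks described above (validating claim records the moment they are recorded, rather than analysing them after the fact) can be sketched in a few lines. The field names and business rules here are hypothetical, invented for illustration, and are not taken from any vendor's product:

```python
# Illustrative sketch of the real-time data-entry validation an RPA bot might
# perform on incoming claim records (hypothetical field names and rules).
from dataclasses import dataclass, field

@dataclass
class ClaimRecord:
    policy_id: str
    amount: float
    errors: list = field(default_factory=list)

def validate(claim: ClaimRecord) -> bool:
    """Apply simple business rules at the moment of data entry."""
    if not claim.policy_id.startswith("POL-"):
        claim.errors.append("malformed policy id")
    if claim.amount <= 0:
        claim.errors.append("non-positive claim amount")
    return not claim.errors

good = ClaimRecord("POL-12345", 2500.0)
bad = ClaimRecord("12345", -10.0)
print(validate(good))   # True  -> passes straight through
print(validate(bad))    # False -> flagged for human review
```

A real bot would sit between the intake form and the database, routing records that fail validation to a human queue instead of writing them; this is the "human supervision" the text describes.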
Insurance is part of everyday life. The concept provides not only peace of mind but also protection against and mitigation of the risks of everyday living. Take into consideration car, life, health, and even natural disaster insurance, and suddenly the coverage and the number of claims going through every day can be daunting. Add to that the risk of fraud, changing premiums, and the expectations of lifelong customers, and it becomes clear how much pressure insurance providers are under. Luckily, artificial intelligence is making not only the lives of insurance companies easier but their customers' as well. When looking at how artificial intelligence is aiding insurance, it's best to look at how claims are being affected. A claim is any formal request made by the policyholder to an insurance company for coverage or compensation under a covered policy. The insurance company then validates the claim, and once it is approved, the policyholder is paid. So how is AI making this better? That can be answered by looking at how AI is increasing the speed at which claims are assessed and solving both new and old problems. Email Overload One of the easiest ways to send in a claim is by email; the only real issue is that a person has to answer each and every email. Depending on the claim and the person answering it, this can start to add up - at least it used to, until the company Cognizant designed an AI to fix the problem. Cognizant explained that their AI is designed to “extract policy, claim, and agent-related information automatically from unstructured incoming emails and attachments. 
The solution integrates our client’s mail exchange server with a cognitive engine on Microsoft Azure.” It was stated that the AI is capable of this due to its “AI-driven self-learning pattern recognition and keyword text recognition to extract data automatically and then respond without manual intervention.” Cognizant went on to say that this was done in order to enable the reassigning of human labour to more valuable tasks. Basically, this AI processes a large number of emails and sorts them out while obtaining the most important details, drawing on what it has learned from past experience. It then approves or denies claims in a timely manner. If there is an anomaly or a very unique case, it relays the claim to an expert to make the final decision. According to Cognizant, this adds up to 60% cost savings, a 50% increase in the ability to handle inquiries, and a sixfold increase in the speed of customer service. Fraud Detection Not only is AI helping to handle an increase in email claims, but it also helps tackle fraudulent claims. Fraud adds up: according to the Insurance Institute of Canada, fraudulent automotive claims cost taxpayers $1.6 billion annually. To understand artificial intelligence's ability to stop fraud, it's best to look at a 2011 fraud case out of the U.K. where criminals faked 120 car accidents to claim over £1 million in false claims. This was a major problem, but Luca Lanzoni, Chief Information Officer at HDI Assicurazioni, and a powerful AI had a solution. When Lanzoni took this issue head-on, he used an advanced AI system. It was explained that the AI “was able to cross-reference individuals and companies through internal and external fraud records, news sites, credit, and law enforcement databases and sift through claims histories in seconds.” It then found patterns across accident claims and news reports showing that the criminals were crashing similar old vehicles into brand new BMWs in the same ways over and over again. 
These details were then given to agents to make a final decision, which led to the criminals' arrests. This shows that AI can help insurance agents use their time properly; these programs pick up on patterns across large bodies of data at a speed humans could never match, given the sheer volume involved. An expert still has to go over the patterns to determine whether they are suspicious and worth looking into or just a one-off. These AIs won't replace employees; they will, however, help protect the company and uphold the law. Speeding Up Service Cognizant and HDI Assicurazioni aren't the only ones to point out the benefit of AI in claims. A New York-based insurance company, Lemonade, has been using AI to process claims faster than ever before. The company has been quick to point to an example where “the company paid a claim for a stolen $979 Canada Goose jacket in just three seconds.” This may not seem like a big deal, but the idea that a claim for a larger sum of money was that easy to process, and handled at that speed, can make a huge difference in someone's life, not to mention bring in repeat business. In a quote from the Financial Times, the CEO of Lemonade, Daniel Schreiber, stated that “a bot handling claims in seconds delights customers and crushes costs.” He has also explained that 11%-13% of the premiums paid by customers go to the bureaucracy of handling claims, and that AI cuts down on those costs. This means that people are willing to try out an AI and that the work these systems do is invaluable to insurance companies that want to save both money and time. In a world market where every penny counts, AI could make all the difference. What It All Means AI is advancing in almost every field, and insurance isn't any different. What is different is the added benefit of AI in insurance claims. 
Whereas most companies use AI to make things more efficient, saving both time and money, insurance companies are using AI to do that while helping people get the money they need and stopping fraud. AI is quite literally helping reduce crime, save companies money, and make both employees' and customers' lives easier.
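The pattern-finding described in the fraud example above can be caricatured in a few lines: group claims by a "collision signature" and flag signatures that recur suspiciously often. All claim data below is invented; a real system would also cross-reference external fraud records and news databases, as HDI's did.

```python
from collections import Counter

# Toy sketch of the pattern-finding idea: group claims by a "collision
# signature" and flag signatures that repeat suspiciously often.
claims = [
    {"id": 1, "striker": "2003 sedan", "struck": "new BMW", "type": "rear-end"},
    {"id": 2, "striker": "2001 sedan", "struck": "new BMW", "type": "rear-end"},
    {"id": 3, "striker": "2019 SUV",   "struck": "hatchback", "type": "side"},
    {"id": 4, "striker": "2002 sedan", "struck": "new BMW", "type": "rear-end"},
]

def suspicious_signatures(claims, threshold=3):
    # Count how often each (struck vehicle, collision type) pair appears.
    sigs = Counter((c["struck"], c["type"]) for c in claims)
    return [sig for sig, n in sigs.items() if n >= threshold]

print(suspicious_signatures(claims))
```

As in the article, the output is not a verdict: the flagged signatures would be handed to a human agent for the final call.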
The last big technological advancement in the automotive industry was Henry Ford's introduction of the moving assembly line in 1913. Over a century later, it is now time for another tech revolution to take place, and this time, robots will take the lead. What are Collaborative Robots? Collaborative robots, or cobots, are essentially robots that work together with humans in order to optimise the workflow. They often come in the form of pre-coded arms which fit right into the existing assembly line and take care of the more labour-intensive or health-damaging tasks. With the complexity of vehicles increasing and an ever-growing need for adaptable machinery, collaborative robots could not have arrived at a better time. They provide a means for automotive factories to implement expensive robotic solutions that are scalable across all of the company's products - the ones existing today, and the ones to come in the future. Generally, there are three main types of collaborative robots: robotic arms, drones, and exoskeletons. Robotic Arms One of the most fundamental robotic structures is the robotic arm. Arms are versatile and highly adaptable to almost any task, which is what makes them the perfect collaborative robot for integrating into an assembly line. The key to a robotic arm's versatility is the degrees of freedom it has. These allow the arm to perform almost any task without external assistance. What makes an arm multifunctional is the ability to change its tool heads. An arm's tool head can be anything from a laser cutter to a suction cup. These tools, paired with the degrees of freedom, can accomplish almost anything around an automotive factory. Universal Robots is a company that creates robotic arms for assembly line integration in automotive factories. They have arms with varying degrees of freedom, as well as functionalities. These arms can be programmed to carry out any and all tasks that can be cast into code. 
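The degrees-of-freedom idea can be illustrated with a minimal planar forward-kinematics sketch: each joint angle is one degree of freedom, and the position of the tool head follows from the joint angles and link lengths. This is a generic textbook model, not Universal Robots' software.

```python
import math

# Minimal planar forward-kinematics sketch: each joint angle is one degree
# of freedom; the end-effector (tool head) position follows from the angles.
def end_effector(link_lengths, joint_angles):
    x = y = 0.0
    total = 0.0  # accumulated rotation along the arm
    for length, theta in zip(link_lengths, joint_angles):
        total += theta
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

# Two links of 1 m each, both joints at 0 rad: the arm lies stretched along x.
print(end_effector([1.0, 1.0], [0.0, 0.0]))
```

Adding a link (and its angle) to the lists adds a degree of freedom, which is why more joints let an arm reach around obstacles and take on more tasks.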
Automakers such as Ford have sensed the way the wind is blowing and have already invested in these collaborative robot arms. Drones as Collaborative Robots Along with assembly line integration, collaborative robots can also be used for inspection and surveillance within a factory setting. The menial, yet important, job of surveillance currently occupies a significant amount of the workforce and can be easily automated with drone technology and computer vision. Drones are already being used in many industries as inspection machines, and automotive factories are quickly catching onto the trend. Drones can be programmed to fly in a search pattern around the factory floor and identify problems through an ML-trained camera. Moreover, they help keep workers safe by flying along the pipework and spotting signs of damage, such as cracks or corrosion. Many major car manufacturers, such as Audi and Ford, have already started using drones as collaborative robots in their factories for all sorts of tasks. They are used for inspection, package delivery from one end of the factory to the other, and even to remotely hand out car keys during the pandemic. In terms of the automotive industry, drone technology is just getting started. Exoskeletons Despite the recent giant advancements in robotics within the automotive industry, there are still parts of assembly that cannot be replaced by robotics. These are often the parts that are labour-intensive and highly repetitive, not to mention risky. Exoskeletons can solve this problem. This wearable technology is essentially a robotic bodysuit that can help the wearer with various tasks. Exoskeletons were originally invented to aid the elderly and the disabled with general tasks such as walking; however, the automotive industry is finding a new use for the technology. They enable factory employees to lift heavy loads and complete repetitive tasks with added precision. 
Their strong shells also protect workers from injury. ULS Robotics, a Shanghai-based company, is a pioneer in this technology. Its current design enables workers to lift an extra 44 pounds of weight and has 6-8 hours of battery life. Its exoskeletons are currently being tested by large automakers such as Hyundai, Ford, and General Motors. Why Collaborative Robots? Many would see collaborative robots as replacing human jobs; however, one of the biggest advantages of these robots is that they are programmed to do the highly repetitive and often dangerous tasks around an automotive factory. The aim is to keep humans safe and occupied with non-menial jobs. The automotive industry is an ever-evolving market. Automobiles have come a long way and have a long way to go yet. In this market, a large concern is that of scalability. Will the technology implemented today still be useful tomorrow? With collaborative robots, the answer is yes. Since these robots are simple, universal hardware devices, they can be programmed and modified for future products. Another large advantage of collaborative robots is that they provide scalability and reliability at little additional cost. Since these robots are built for precision, they leave little room for error in the process. As vehicles get more and more complex, these robots provide the tools with which this advancement can continue. Upcoming Cobots Cobots are just getting started in the auto industry. Although the technology is still maturing within the robotics labs of major automakers, we are already starting to see some of it come to light. Robotic arms are well on their way to taking over almost every repetitive task within the factory. As more of the technology is perfected within the research labs, it is expected that we will see these arms working hand-in-hand with humans in order to complete some of the most arduous tasks. Drone technology still has much in store for the automotive industry. 
From in-depth product inspection to becoming a flying toolbox for the factory floor, drones are certainly making their way into factories. Out of all the technologies discussed, exoskeletons are the most underdeveloped and perhaps the most important. It is hard to predict where this wearable technology will go; however, it is certain that it will be one of the next major advancements in the industry.
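The drone search patterns mentioned earlier can be sketched as a simple "lawnmower" waypoint generator over a factory-floor grid; a real system would add altitude, obstacle avoidance, and the ML inspection step on top.

```python
# Sketch of a "lawnmower" search pattern over a factory-floor grid: the
# drone sweeps each row, alternating direction, so every cell is visited
# exactly once without wasted travel back to the row start.
def search_pattern(rows, cols):
    waypoints = []
    for r in range(rows):
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        waypoints.extend((r, c) for c in cells)
    return waypoints

print(search_pattern(2, 3))  # [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```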
Allergies are increasingly common pathologies in human society. In the past, it was believed that they were due to an exaggerated reaction of the immune system to a specific antigen - a hypersensitivity reaction. Today we know that, paradoxically, it is actually the harmlessness of the external agent that leads to pathogenesis. In fact, there is no failure of the immune system, apart from its attacking a harmless substance. This ends up damaging the surrounding tissues through a multitude of molecular mechanisms, including cytotoxicity. There are many types of allergies and ways of developing them, but in general, we can establish some broad guidelines. Repeated overexposure to certain protein regions, combined with the lack of a particular enzyme that hydrolyzes that region, usually results in allergy over time. Sometimes it is avoidable, sometimes not, because it has a high genetic component. For example, in areas where olive trees are cultivated, many people develop olive pollen allergies. The fact that these ailments are becoming more and more frequent is mainly due to the contamination of water, the atmosphere, and the food supply. While many additives are harmless or even beneficial to health (such as ascorbic acid, a great preservative), there are other molecules that can interact with our health in a negative way and sometimes end up triggering an allergy. Current Strategies Of course, there are several current treatments for allergies. Antihistamines have saved many lives. However, treating food allergies is more difficult. Synthetic biology can help solve the root of the problem. This is the thinking of Ukko, a start-up founded in Israel that adopts two main strategies: redesigning foods that are normally allergenic (such as wheat products that contain gluten) so that they are no longer allergenic, and designing edible drugs that prevent the immune reaction to the antigen that causes the allergy. 
To accomplish this feat, one can make use of the new technologies now available. In this case, the difficulty lies in identifying the amino acid sequence of the allergens and their three-dimensional structure. A field of artificial intelligence, deep learning, can test millions of structural combinations, making it possible to study the effectiveness of the immune response to the protein. Once we have this information, the protein can be modified in the food-producing organism to make it hypoallergenic. Other strategies are just as striking, such as the construction of a biological microsensor based on mammalian cells - put another way, a human cell designed with a circuit of synthetic signaling reactions that quantifies allergen-triggered histamine release. This is also a breakthrough in personalized medicine. What is clear is that the pull of artificial intelligence will lead to large outlays by investors in increasingly innovative healthcare strategies. The ability to screen drugs and their respective targets opens the door to another model of in silico research. Vaxine is a veteran example of an Australian startup using artificial intelligence to discriminate compounds as adjuvants in vaccines, focusing on treatments for communicable diseases, allergies, and cancer. It should be noted that there are four types of allergic reaction, and the one we are dealing with here belongs to the group of anaphylactic (type I) allergies. Within food allergies there is a group that affects 1 to 10% of the population in developing countries, although this depends greatly on geographical location and age; it has its origin in an abnormal interaction between immunoglobulin E (IgE) and the epitope of the allergen, i.e. the amino acid surface that is recognized by the hypervariable region of the antibody. This type of allergy is as serious as it is avoidable, since, as mentioned above, it would be sufficient to delete this protein region from the product by means of genetic engineering. 
A peanut allergy sufferer could eat peanuts again without any problem, much as is the case with gluten-free foods. However, the case of gluten is different: the current approach is not so sophisticated - it amounts to a series of denaturing chemical reactions - and it detracts from the nutritional quality of the food, which is why researchers are already studying how to eliminate gluten using synthetic biology instead. It should be noted that many of these proteins can be substituted in terms of nutritional value, but not in terms of the function they perform. For example, the baking of gluten-free flours is much more costly in terms of energy, since the dough does not form gluten's three-dimensional networks together with the arabinoxylans, and its kneading quality is poorer. All these things must be taken into account when designing a hypoallergenic protein substitute, so that it is also able to perform its function relatively normally or even improve on the natural functionality (nutritional capacity, higher performance...). The Future New strategies targeting transient genetic modification of T lymphocytes are under study. With a clear understanding that the problem lies in that misguided immune system response, and not in an overreaction as previously thought, we now hold the key to taming food allergy outbreaks once and for all - even predicting future allergies caused by overexposure to an antigen, whether in food or in another source of contact. Systems biology will play a key role in the coming years, connecting biological circuits with feedback from each iteration studied and driven by AI.
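The epitope-deletion strategy discussed in this article can be caricatured as a string operation on a protein sequence. The sequences below are invented, and real epitope mapping relies on structural models and deep learning rather than substring search.

```python
# Toy sketch of epitope removal: locate a known IgE-binding motif in a
# protein sequence and splice it out. Sequences are invented; real epitope
# engineering works on 3D structure, not flat substring matches.
def remove_epitope(protein: str, epitope: str) -> str:
    i = protein.find(epitope)
    if i == -1:
        return protein  # epitope absent: nothing to edit
    return protein[:i] + protein[i + len(epitope):]

protein = "MKTAYIAKQRQISFVK"   # invented amino acid sequence
epitope = "AKQR"               # invented IgE-binding motif
print(remove_epitope(protein, epitope))
```

The hard part the article describes is everything this sketch skips: finding which region is the epitope, and ensuring the edited protein still folds and functions.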
Potential of Prime Editing over CRISPR in Healthcare
Everyone up to date on biotechnological advances has heard of the CRISPR system for editing genomes. It is also true that there is a lot of fuss about this technology, and not for nothing. The original function of this bacterial system is one of the most ingenious ways for simple organisms to acquire immunity through repeated contact with bacterial viruses. Clustered regularly interspaced short palindromic repeats (CRISPR) are families of DNA fragments, separated by recognizable spacers, that bacteria excise from viral genomes and store in their own genome. They thus form a kind of "necklace" containing all the isoforms of viral regions with which the bacteria have come into contact. The beads of this "necklace" serve as a template upon a new exposure to the viral genome, and a group of endonucleases called Cas are the enzymes in charge of inactivating the pathogenic sequence, in what is known as the crRNA complex. There are three main types of CRISPR response, with more or less significant differences in the process. The most used for genome editing is type II, because it requires only the multifunctional Cas9 protein, which grants reasonable efficiency and specificity. The fundamental problem of DNA editing is repairing the double-strand break without losing information while preserving specificity. A well-designed CRISPR system can address this in several ways. This discovery of the bacterial immune system was quickly recognized as the system with the greatest potential in gene editing, thanks to its flexibility and precision compared to the competition (other endonuclease-based systems, such as zinc-finger nucleases). However, some studies claimed the occurrence of spontaneous mutations when using this genome-editing platform. Although other studies have doubted the veracity of these accusations, the fact is that, empirically, there were off-target changes that made in vivo editing unfeasible at an early stage. 
Undoubtedly, the number of citations and start-ups born in the shadow of this innovation of tremendous potential has grown exponentially over the last decade, year after year. Now, there are so many CRISPR systems that it is difficult to refer to just one. The most promising one, however, is so-called prime editing. It consists of three major elements: a protein domain with a modified Cas9 that cuts a single strand (nickase action) and a reverse transcriptase bound to Cas9, a single guide RNA (sgRNA), and an RNA fragment called the prime editing guide RNA (pegRNA). The pegRNA has two primary functions: to increase specificity for the genomic target and to serve as a template for the reverse transcriptase. Once the reverse transcriptase has finished rewriting the strand, other endonucleases degrade the original unbound fragment. At this point, the modification is only on one of the strands, so a guide RNA must be used again for the Cas9 enzyme to nick the "healthy" strand. This is sufficient, since the cell's own mechanisms will repair that strand using the modified strand we introduced in the first place as a template. As we have seen, the fundamental problem of in vivo gene editing can be solved with prime editing. However, the efficiency of this system is still likely to improve in the coming years, making the technology more and more accessible, with fewer off-targets. Some limitations of this technology are the size of the inserts and the number of cells that can be modified. That said, the applications are almost limitless, not only for prime editing but for the whole compendium of present and future CRISPR variants. It is estimated that 90% of genetic diseases could be eradicated with current knowledge alone. Moreover, applications are not restricted to direct solutions; there are also indirect ones, such as editing the genomes of pathogens. 
One start-up clearly focused on this strategy is Locus Biosciences, which uses the CRISPR-Cas3 system to provide a pathogen-specific bactericidal solution within a complex microbiome. Bioinformatics and machine learning tools are used to design the viral platforms into which these CRISPR genes are implanted. Another very interesting application in health is not palliation or cure, but diagnosis using CRISPR. This is what they are doing at Caspr Biotech, with a very well-thought-out multidisciplinary approach. Affinity is its greatest strength, as it can detect virtually any RNA or DNA sequence. It is also fast, reliable, and inexpensive. The Future Whenever I am asked why it is necessary to edit the genome of organisms, I answer patiently and try to reason with the opposing positions. If we weigh the pros and cons, we will understand that the benefit for humanity is magnificent, while the harm is entirely avoidable if these modifications are carried out with the proper safety protocols. In this context, CRISPR has yet to improve in large-scale genome editing. Together with de novo synthesis of genomes, these are the two approaches with the greatest potential to solve the world's problems. Not only health but also pollution, the greenhouse effect, and biodiversity loss depend directly on how genomic techniques advance and how they are applied. Of course, CRISPR remains the reigning tool for in vivo editing. Ethical barriers are important in a society, but they tend to blur over time. Many companies are eagerly tapping into this niche. It is in our power to educate ourselves and demand responsible measures from our leaders to encourage these proposals.
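As a rough mental model, the prime-editing steps described earlier can be mimicked with string operations on a DNA sequence: the pegRNA spacer locates the target, and the reverse-transcribed edit template rewrites it. This toy sketch ignores PAM sites, strand chemistry, and cellular repair; the sequences are invented.

```python
# Toy string model of the prime-editing steps: the pegRNA spacer locates
# the target site, and the edit template rewrites it. Invented sequences;
# this ignores PAM requirements, nicking chemistry, and DNA repair.
def prime_edit(genome: str, spacer: str, edit_template: str) -> str:
    i = genome.find(spacer)  # spacer hybridisation finds the target site
    if i == -1:
        raise ValueError("target site not found")
    # The reverse transcriptase copies the edit template over the target;
    # the cell then repairs the second strand to match the edited one.
    return genome[:i] + edit_template + genome[i + len(spacer):]

genome = "ATGGTCACCTTGA"
print(prime_edit(genome, spacer="GTCACC", edit_template="GTGACC"))
```

Note how the edit is templated rather than left to error-prone double-strand-break repair - that is the property that makes prime editing attractive for in vivo use.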
With more and more focus being shifted to sustainable energy sources, investment in electric vehicles is at an all-time high. Of course, most of these vehicles are still conventionally manufactured, but OEMs have slowly started to employ additive manufacturing technology in a bid to cut both costs and lead times. Tesla goes additive The biggest name in the electric vehicle market is undoubtedly Tesla. The Musk-led firm has previously used FDM 3D printing to produce spare parts for the Tesla Model Y. Specifically, it was the YouTube channel Munro Live that unintentionally caught the inconspicuous part in a teardown video. Having inspected the HVAC (heating, ventilation, and air conditioning) airbox of the car, a large injection molded component, the engineer making the video spotted what was quite clearly a 3D printed circular part. This was given away by the characteristic layer lines that 3D printing leaves behind. Although unconfirmed, it looked like 3D printing was actually used as a quick-fix patch job to address a manufacturing fault in the Model Y. Delving deeper, it is possible that Tesla missed a fault in the Model Y's HVAC unit and started manufacturing the vehicle before it was fixed. By the time the company's engineers spotted the issue, it may have been too late, with hundreds of HVAC units already manufactured and ready to go. At that point, it is indeed easier to just 3D print a missing part of the vehicle rather than establish an entirely new injection molding production line from scratch, proving 3D printing can have major implications for spare part production in the unlikeliest of scenarios. Smaller manufacturers, bigger dreams It's not just the big names either, as smaller manufacturers have started relying on AM technology to bolster their own workflows. Engineers from UK-based technology startup Scaled recently developed what they are calling the country's first 3D printed electric vehicle. 
The buggy goes by the name of Chameleon, and it has a single seat and a completely 3D printed frame. As it stands, the prototype can muster a top speed of just 45mph and weighs 150kg in total. It is powered by a Lynch electric motor and also features a number of non-3D printed parts produced by students from a nearby university. At last year's CES in Las Vegas, Swiss automotive OEM Rinspeed also unveiled its take on a 3D-printed electric car. The vehicle, MetroSnap, is currently a concept and was developed in collaboration with 3D printer manufacturer Stratasys. Interestingly, the sustainable vehicle houses over 30 3D printed automotive parts on the interior and exterior. These include a number of interior consoles, plug socket fixtures, air vents, lidar screens, display frames, and even a licence plate. Finally, we also have German 3D printer OEM BigRep, which revealed its own entirely 3D printed autonomous electric podcar back in 2019. The car goes by the name of LOCI, and it is meant to showcase the application of large-format 3D printing in the creation of functional end-use transportation devices. The 3D printed vehicle was also used to debut the company's Part DNA technology, which integrates NFC chips into 3D printed shells. LOCI is home to 14 custom 3D printed parts and clocks in at 850mm x 1460mm x 2850mm, with the largest of the 3D printed components measuring 1000 x 600 x 700mm. Most of the parts in the vehicle were 3D printed on the company's large-format FFF 3D printers using BigRep's PRO HT filament. The airless tyres, however, were 3D printed using TPU polymer. Moreover, the bumpers were fabricated with PLX, and the car's designers chose PA6/66 for the joints in the vehicle. The future of 3D printing in the EV space It's common knowledge at this point that 3D printing provides a whole host of part customization benefits. 
The technology is great at producing highly complex geometries at low volumes, which is exactly what a lot of smaller-scale EV manufacturers are looking for. The future will also witness increased collaboration between EV manufacturers and specialised 3D printing companies. These collaborations would not just focus on design and modelling but would widen to areas such as customised tooling, manufacturing, raw material sourcing, and pricing. We've also seen that Tesla, while much larger, is using additive manufacturing to boost its spare part production capabilities. The lead time advantages of 3D printing are nothing to scoff at, and it seems Elon Musk knows this very well. Looking to the future, we can probably expect additive manufacturing uptake to increase significantly in the EV space, as it has been doing in the wider automotive sector for a number of years now.
3D printing has come a long way since its invention back in the 1980s. The clunky mechanical components of old have been substituted for refined, slick, precision-machined parts that enable additive manufacturing on a scale never seen before. Regardless, the throughput of the technology is still far behind that of conventional manufacturing techniques, but this is slowly changing. To keep up with an ongoing rise in global demand, the speed at which 3D printers can produce parts has increased drastically, and specialist OEMs and researchers alike are constantly pushing the boundaries of what is possible, both in terms of hardware and software. Leading the charge with vat photopolymerization Without a doubt, the fastest sub-technology within 3D printing is vat photopolymerization (VP). Digital Light Processing (DLP) and Liquid Crystal Display (LCD) systems use projected or masked light, typically in the UV range, to cure photopolymer resins into solid 3D shapes, one layer at a time. The simple fact that entire layers can be cured in one go, often in under 7 seconds, makes VP synonymous with rapid 3D printing. Leading the market, In-Vision, an Austrian developer of high-precision optical systems, recently launched its most powerful 3D printing light engine to date. With over two years of research and development behind it, HELIOS is a UV light projector designed specifically for resin-based 3D printers. The company claims its engine can achieve the highest illumination intensity in the market, enabling faster cure times and compatibility with a greater number of resin types. To put it into numbers, the engine can deliver up to 60W of illumination power, which is around double the current industry standard. This kind of power simply wasn't around when VP inventor Chuck Hull first worked with the technology in the early 80s. 
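A back-of-envelope calculation shows why whole-layer curing dominates on throughput. All the numbers below are illustrative assumptions, not measured specs of any particular machine.

```python
# Back-of-envelope comparison of why whole-layer curing is fast.
# All numbers are illustrative assumptions, not measured specs.
layer_height_mm = 0.05
part_height_mm = 50.0
layers = part_height_mm / layer_height_mm   # about 1000 layers

# VP flashes each whole layer at once, regardless of its area.
vp_seconds_per_layer = 7.0
vp_hours = layers * vp_seconds_per_layer / 3600

# An FDM nozzle must trace out each layer's cross-section instead,
# so per-layer time grows with area (120 s assumed here).
fdm_seconds_per_layer = 120.0
fdm_hours = layers * fdm_seconds_per_layer / 3600

print(f"VP: {vp_hours:.1f} h, FDM: {fdm_hours:.1f} h")
```

The gap also explains why a brighter light engine helps VP so directly: it shrinks the one number (seconds per layer) that the whole build time scales with.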
Over in the academic space, researchers from Northwestern University's McCormick School of Engineering recently developed a novel VP 3D printer capable of printing up to 2,000 layers a minute. Again, the impressive print speed can be attributed to a high-power light engine, which helps to cure resin at a much faster rate. Interestingly, the system makes use of a six-axis robotic arm - something you might see employed by an industrial DED 3D printer or a conventional assembly line - rather than the more common Z-axis rail that most other resin printers operate on. By providing the freedom to move, rotate, and rescale each layer as it is being printed, the system also grants a whole new level of design freedom. Advances in Fused Deposition Modelling Fused deposition modelling, or FDM, is the most common 3D printing technology out there, but it's nowhere near as fast as resin technology. This is because the nozzle must trace out every layer's cross-section - and with it the part's entire volume - rather than flashing whole layers in one fell swoop. Without the option of a more powerful light engine, FDM engineers are forced to be a little more creative to boost their print speeds. Taking a fairly unorthodox approach, a company called Ulendo uses software algorithms to increase print speeds on third-party FDM printers. In fact, the company's technology recently won a $250,000 research and development grant from the National Science Foundation's America's Seed Fund program. The software goes by the name of Ulendo FBS, and it works by modifying an FDM 3D printer's firmware to improve print speeds by up to 100%, all without sacrificing the quality of the part being produced. The program addresses an issue that has plagued desktop printers for as long as there have been desktop printers - vibrations. 
Many of the desktop systems on the market today still need to operate at relatively slow print speeds to dampen the vibrations caused by their moving parts. For reference, a typical print speed is around 60mm/s. Print too fast, and you run the risk of part defects and misaligned layer lines where the printer's frame has shaken itself excessively. At the heart of Ulendo FBS is a vibration compensation algorithm developed to counteract these unwanted vibrations in a moving 3D printer's frame. The program anticipates when the printer is about to experience a disruptive vibration and dynamically adjusts its motion accordingly using predictive control models. So, even though FDM technology may be reaching its limits in terms of hardware, the ingenuity of software such as Ulendo FBS can still squeeze out a little more performance. Taking it to the next level The examples covered above are by no means exhaustive, but they act as proof that, decades on, there are still innovations in both the hardware and software realms pushing the performance of the technology to new heights. As is human nature, engineers and designers will constantly look for the next best thing, whether that be higher-power lasers, improved light engines, or more robust mechanical frames. Ultimately, 3D printing speeds still have a while to go before they can match the throughput of conventional manufacturing technologies such as injection molding and subtractive machining, but that's not to say they're not getting there day by day.
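Ulendo's FBS algorithm itself is proprietary, but the general idea of software vibration compensation can be illustrated with a classic technique from the same family: the zero-vibration (ZV) input shaper, which splits each motion command into impulses timed so their residual vibrations cancel. The resonance and damping values below are assumed for illustration.

```python
import math

# Ulendo FBS is proprietary; this sketch shows a related classic idea,
# the zero-vibration (ZV) input shaper: split each motion command into
# two impulses timed so their residual vibrations cancel each other.
def zv_shaper(natural_freq_hz, damping_ratio):
    wd = 2 * math.pi * natural_freq_hz * math.sqrt(1 - damping_ratio**2)
    K = math.exp(-damping_ratio * math.pi / math.sqrt(1 - damping_ratio**2))
    a1, a2 = 1 / (1 + K), K / (1 + K)       # impulse amplitudes sum to 1
    return [(0.0, a1), (math.pi / wd, a2)]  # (time in s, amplitude) pairs

# A frame resonating at 40 Hz with light damping (assumed values):
for t, a in zv_shaper(40.0, 0.05):
    print(f"impulse at {t * 1000:.2f} ms, amplitude {a:.3f}")
```

Convolving motion commands with these impulses trades a tiny, fixed time delay for a frame that stops ringing - the same bargain, in spirit, that lets firmware-level compensation raise print speeds without misaligned layers.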