Thaw could release Cold War-era U.S. toxic waste buried under Greenland's ice

OSLO Global warming could release radioactive waste stored in an abandoned Cold War-era U.S. military camp deep under Greenland's ice sheet if a thaw continues to spread in coming decades, scientists said on Friday.

Camp Century was built in northwest Greenland in 1959 as part of U.S. research into the feasibility of nuclear missile launch sites in the Arctic, the University of Zurich said in a statement. Staff left gallons of fuel and an unknown amount of low-level radioactive coolant there when the base shut down in 1967, on the assumption it would be entombed forever, according to the university.

The waste currently lies about 35 meters (115 ft) down, but the part of the ice sheet covering the camp could start to melt by the end of the century on current trends, the scientists added.

"Climate change could remobilize the abandoned hazardous waste believed to be buried forever beneath the Greenland ice sheet," the university said of findings published this week in the journal Geophysical Research Letters.

The study, led by York University in Canada in collaboration with the University of Zurich, estimated that pollutants in the camp included 200,000 liters (44,000 UK gallons) of diesel fuel and the coolant from a nuclear generator used to produce power.

"It's a new breed of political challenge we have to think about," lead author William Colgan, a climate and glacier scientist at York University, said in a statement.

"If the ice melts, the camp's infrastructure, including any remaining biological, chemical, and radioactive wastes, could re-enter the environment and potentially disrupt nearby ecosystems," the University of Zurich said.

The study said it would be extremely costly to try to remove any waste now, and recommended waiting "until the ice sheet has melted down to almost expose the wastes before beginning site remediation." There was no immediate comment from U.S. authorities.

(Reporting By Alister Doyle; Editing by Andrew Heavens)


Study finds cosmic rays increased heart risks among Apollo astronauts

CAPE CANAVERAL, Fla. Apollo astronauts who ventured to the moon are at five times greater risk of dying from heart disease than shuttle astronauts, U.S. researchers said on Thursday, citing the dangers of cosmic radiation beyond the Earth's magnetic field.

The study by researchers at Florida State University and NASA found that three Apollo astronauts (43 percent of those studied), including Neil Armstrong, the first person to walk on the moon, died from cardiovascular disease, a finding with implications for future human travel beyond Earth.

The research, published in the journal Scientific Reports, was the first to look at the mortality of Apollo astronauts, the only people so far to have traveled more than a few hundred miles from Earth. It found that the chief health threat to the Apollo astronauts came from cosmic rays, which are more prevalent and powerful beyond the magnetic bubble that surrounds Earth.

NASA disputed the findings, saying it was too early to draw conclusions about the effect of cosmic rays on Apollo astronauts because the current data are limited.

The results of the study have implications for the United States and other countries, as well as for private companies, such as Elon Musk's SpaceX, that are planning missions to Mars and other destinations beyond Earth.

For the study, the researchers examined the death records of 42 astronauts who flew in space, including seven Apollo veterans, and 35 astronauts who died without ever going into space. They found the Apollo astronauts' mortality rate from cardiovascular disease was as much as five times higher than for astronauts who never flew, or for those who flew low-altitude missions aboard the space shuttle, which orbited a few hundred miles above Earth.

A companion study simulated weightlessness and radiation exposure in mice and showed that radiation exposure was far more threatening to the cardiovascular system than other factors, lead scientist Michael Delp said in an interview. "What the mouse data show is that deep space radiation is harmful to vascular health," he said.

So far, only 24 astronauts have flown beyond Earth's protective magnetic shield, in missions spanning the four years from December 1968 to December 1972. Of those, eight have died, seven of whom were included in the study. The cause of death of the eighth astronaut, Apollo 14's Edgar Mitchell, who died in February 2016, has not been released, so he was excluded from the study, Delp said. Mitchell was the sixth person to walk on the moon.

Delp and colleagues are working on a follow-up study that includes more detail on family medical histories, smoking and other factors.

(Reporting by Irene Klotz; Editing by Julie Steenhuysen and Peter Cooney)


Fine Tune Your Polling and Batching in Mule ESB

They say it's best to learn from others. With that in mind, let's dive into a use case I recently ran into. We were dealing with a number of legacy systems when our company decided to shift to a cloud-based solution. Of course, we had to prepare for the move, and for all the complications that came with it.

Use Case

We have a legacy system built on an Oracle database, with applications created in Oracle Forms and lots and lots of stored procedures in the database. It has been in use for over 17 years now with no major upgrades or changes. Naturally, the many development changes made over those 17 years have taken the system close to its breaking point, making it almost impossible to implement anything new. So the company decided to move to a CRM (Salesforce), and we needed to transfer data to Salesforce from our legacy database. However, we couldn't create any triggers on our database to send real-time data to Salesforce during the transition period.

Solution

We decided to use a Mule poll to query our database, fetch records in bulk, and send them to Salesforce using the Salesforce Mule connector.

I am assuming that we are all clear about polling in general. If not, please refer to the references at the end; there are also a few references on Mule's polling implementation at the bottom. Sounds simple enough, doesn't it? But wait, there are a few things to consider:

- What is the optimum frequency for your polls?
- How many threads should each poll have? How many active or inactive threads do you want to keep?
- How many polls can we write before we overload the object store and queue store that Mule uses to maintain polling state?
- What is the impact on the server file system if you use watermark values in the object store?
- How many records can we fetch in one query from the database?
- How many records can we actually send in one bulk call to Salesforce using the SFDC connector?

These are a few, if not all, of the considerations to weigh before implementation.
The major part of polling is the WATERMARK and how Mule implements it on the server.

Polling for Updates Using Watermarks

Rather than polling a resource for all its data with every call, you may want to acquire only the data that has been newly created or updated since the last call. To acquire only new or updated data, you need to keep a persistent record of either the item that was last processed or the time at which your flow last polled the resource. In the context of Mule flows, this persistent record is called a watermark.

To make the watermark persistent, Mule ESB stores watermarks in the object store under the project's runtime directory on the ESB server. Depending on the type of object store you have configured, this may be a SimpleMemoryObjectStore or a TextFileObjectStore; either can be set up in your application's Mule XML.

For any kind of object store, Mule ESB creates files on the server, and if the frequency of your polls is not carefully configured, you may run into file storage issues. For example, if you run your poll every 10 seconds with multiple threads, and your flow takes more than 10 seconds to send data to Salesforce, then a new object store entry is made to persist the watermark value for each flow trigger, and you end up with too many files in the server's object store.

To set these values, we have to consider how many records we are fetching from the database, since Salesforce has a limit of 200 records per bulk call. So, if you are fetching 2,000 records, one batch will call Salesforce 10 times to transfer those 2,000 records.
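As a rough illustration, here is a minimal sketch of the two object-store setups and a watermarked poll, assuming Mule 3.x conventions; the bean ids, flow name, database configuration name, table, and column names are hypothetical, not taken from the original post:

```xml
<!-- In-memory store (hypothetical sketch): watermark values are lost on restart -->
<spring:bean id="watermarkStore"
             class="org.mule.util.store.SimpleMemoryObjectStore"/>

<!-- File-backed store: watermark values survive restarts, but each entry becomes
     a file on the server, which is why poll frequency matters -->
<spring:bean id="persistentWatermarkStore"
             class="org.mule.util.store.TextFileObjectStore"/>

<flow name="dbToSalesforcePoll">
  <poll doc:name="Poll">
    <fixed-frequency-scheduler frequency="60" timeUnit="SECONDS"/>
    <watermark variable="lastModified"
               default-expression="#[server.dateTime]"
               object-store-ref="persistentWatermarkStore"/>
    <db:select config-ref="Oracle_Configuration" doc:name="Select changed rows">
      <db:parameterized-query><![CDATA[
        SELECT * FROM customers WHERE last_modified > #[flowVars.lastModified]
      ]]></db:parameterized-query>
    </db:select>
  </poll>
  <!-- downstream: batch the rows and push to Salesforce via the SFDC connector -->
</flow>
```

The key knobs for the issues discussed in this post are the scheduler's frequency and the object-store-ref on the watermark.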
If your flow takes five seconds to process 200 records, including the network round trip to Salesforce, then a complete poll will take around 50 seconds to transfer 2,000 records. With a polling frequency of 10 seconds, we are piling up the object store.

Another issue is the queue store. Because there is a big gap between the poll frequency and the execution time, the queue store will also keep growing, and again you have to deal with too many files.

To resolve this, it's always a good idea to fine-tune your flow's execution time and poll frequency to keep the gap small. To manage the threads, you can use Mule's batch flow threading settings to control how many threads run and how many stay active.

I hope a few of these details help you set up your polling in a better way.

There are a few more things to consider. What happens when an error occurs while sending data? What happens when Salesforce returns an error and can't process your data? What types of errors will Salesforce send you? How do you rerun your batch with the watermark value if it failed? What about logging and recovery? I will try to cover these issues in a second blog post.

References:

https://docs.mulesoft.com/mule-user-guide/v/3.6/poll-reference#polling-for-updates-using-watermarks
https://docs.mulesoft.com/mule-user-guide/v/3.7/poll-reference
https://docs.mulesoft.com/mule-user-guide/v/3.7/poll-schedulers#fixed-frequency-scheduler
https://en.wikipedia.org/wiki/Polling_(computer_science)
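As a postscript, the poll-timing arithmetic from this post can be checked with a short script. The 200-record Salesforce limit, the 2,000-row fetch, and the 5-second round trip are the figures from the text; the function name is mine:

```python
def poll_cycle(records_per_poll, sf_batch_limit, seconds_per_batch):
    """Return (number of Salesforce calls, total seconds) for one poll cycle."""
    # Ceiling division: a partial batch still costs one Salesforce call
    calls = -(-records_per_poll // sf_batch_limit)
    return calls, calls * seconds_per_batch

# Figures from the post: 2,000 rows per poll, 200 rows per Salesforce call,
# roughly 5 seconds per call including the network round trip.
calls, total = poll_cycle(2000, 200, 5)
print(calls, total)  # 10 calls, 50 seconds per cycle

# With a 10-second poll frequency, about 5 new polls fire before one cycle
# finishes, which is exactly how the object store and queue store pile up.
```

Keeping the cycle time at or below the poll frequency (fewer rows per poll, or a longer frequency) closes that gap.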


SpaceX rocket lifts off on cargo run, then lands at launch site

CAPE CANAVERAL, Fla. An unmanned SpaceX rocket blasted off from Florida early on Monday to send a cargo ship to the International Space Station, then turned around and landed itself back at the launch site.

The 23-story-tall Falcon 9 rocket, built and flown by Elon Musk's Space Exploration Technologies, or SpaceX, lifted off from Cape Canaveral Air Force Station at 12:45 a.m. EDT (0445 GMT).

Perched on top of the rocket was a Dragon capsule filled with nearly 5,000 pounds (2,268 kg) of food, supplies and equipment, including a miniature DNA sequencer, the first to fly in space.

Also aboard the capsule was a metal docking ring, 7.8 feet (2.4 m) in diameter, that will be attached to the station, letting commercial spaceships under development by SpaceX and Boeing Co. ferry astronauts to the station, a $100-billion laboratory that flies about 250 miles (400 km) above Earth. The manned craft are scheduled to begin test flights next year.

Since NASA retired its fleet of space shuttles five years ago, the United States has depended on Russia to ferry astronauts to and from the station, at a cost of more than $70 million per person.

As the Dragon cargo ship began its two-day journey to the station, the main section of the Falcon 9 booster rocket separated and flew itself back to the ground, touching down a few miles south of its seaside launch pad, accompanied by a pair of sonic booms. "Good launch, good landing, Dragon is on its way," said NASA mission commentator George Diller.

Owned and operated by Musk, the technology entrepreneur who founded Tesla Motors Inc, SpaceX is developing rockets that can be refurbished and re-used, potentially slashing launch costs. With Monday's touchdown, SpaceX has successfully landed Falcon rockets on the ground twice and on an ocean platform during three of its last four attempts.

SpaceX intends to launch one of its recovered rockets as early as this autumn, said Hans Koenigsmann, the firm's vice president for mission assurance.
(Reporting by Irene Klotz, Editing by Chris Michaud and Clarence Fernandez)


1 in 16 Java Components Have Security Defects

Sonatype just released its second annual State of the Software Supply Chain Report. Over the past year, researchers amassed a great deal of data on the staggering volume and variety of Java (as well as NuGet, RubyGems, and npm) open source components flowing through software supply chains into development environments. This year, the report assessed behaviors across 3,000 organizations and performed deep analysis on over 25,000 applications.

The results ranged from staggering to surprising to sobering. For example, researchers measured organizations consuming an average of 229,000 components annually. The good news is that these components help companies accelerate their development and innovation. At the same time, 6.8% of components used in applications carried at least one known security vulnerability, adding high levels of security debt. Not all components are created equal.

In the past year, Sonatype was far from the only organization pursuing improved software supply chain practices. The researchers studied the patterns and practices of high-performance organizations and documented how these innovators apply the principles of software supply chain automation to manage the massive flow and variety of open source components. These organizations strive to consistently deliver higher-quality applications for less, while lowering their risk profile. This year's report profiles organizations across the banking, insurance, defense, energy, technology, and government sectors.

The 2016 State of the Software Supply Chain Report blends public and proprietary data with expert research and analysis to reveal the following:

- Developers are gorging on an ever-expanding supply of open source components. Billions of open source components were downloaded in the last year.
- Vast networks of open source component suppliers are growing rapidly. Over 1,000 new open source projects and 10,000 new versions of open source components are introduced daily.
- Software components vary widely in quality: 1 in 16 parts includes a known security defect.
- Top-performing enterprises, federal regulators, and industry associations have embraced the principles of software supply chain automation to improve the safety, quality, and security of software.

If you are developing with Java or other open source components, we invite you to read the report and leverage its insights to understand how your organization's practices compare to others'. If you would like to join a live discussion of this year's report, you can hear from the research team on Wednesday, July 13th.
