Everything posted by PB666

  1. BW design may increase lift per weight, a good thing, but people want speed. Adding an open-prop design generally means operating at 2/3 Mach or below. It may also mean you can operate at a higher altitude (with jet engines redesigned for high-altitude flight at lower face velocities) because its IAS for critical lift is lower, whereas the mundane 737 operates at 3/4 Mach. Yes, the BW plane is more efficient, but slower. The 747 has one of the highest glide ratios of any aircraft shy of the carbon-fiber variants, and it can also go 0.95M if need be. Everything has its place. If you want to loiter over a target for 2 days, an aircraft that can sustain 50,000 ft for 2 days at an IAS of say 140 knots would be a good thing . . . out of missile range, stealth design, and at 15 km up it's almost invisible from the ground (a quick IAS-to-TAS sketch is below). If you want to fly passengers from LA to NY who already have to spend 2 hours waiting in the TSA line before getting on the aircraft (a task people increasingly hate), adding an extra hour or two to the flight is not going to make you their best friend.
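A minimal sketch of why a low IAS at high altitude still means a respectable true airspeed. It assumes an approximate ISA density at 50,000 ft (~0.186 kg/m³) and ignores compressibility, so EAS ≈ IAS; the numbers are illustrative only:

```python
import math

RHO_SL = 1.225      # sea-level air density, kg/m^3 (ISA)
RHO_50K = 0.186     # approximate ISA density at 50,000 ft, kg/m^3 (assumed)

ias_kt = 140.0      # indicated airspeed from the post
# Ignoring compressibility, TAS ~= IAS / sqrt(density ratio)
tas_kt = ias_kt / math.sqrt(RHO_50K / RHO_SL)
print(f"{ias_kt:.0f} kt IAS at 50,000 ft is roughly {tas_kt:.0f} kt true airspeed")
# -> roughly 360 kt TAS, so the loitering aircraft is not actually crawling
```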
  2. Starting with "Gundam fan" sends up a whole lot of flags. The first people who colonize are likely to be elites, the intellectual/military fusion type, but after they realize it's wasteful, meh, they would stay on Earth and send junior-level trainees in their place. When I was looking for a job out of college I saw a position open for a junior-level technician at the research station in Antarctica, with hazardous-duty pay. I thought to myself: despite being jobless, I'm not desperate enough to take that. One thing that's not going to happen is sending vagrants into space. The social attitude of the time is to provide rationed assistance to the poor (that's why the ACA is in danger; the ruling party does not want to provide health care to the 'lowest productivity rung' of society). They will not foot the bill for a 5-million-dollar rocket trip to a colony, forget that. So the impoverished who live in space only get that way because they are the progeny of those who colonized space. That is a story line of dwindling interest, like one of the 1980s Mars movies or Babylon 5.
  3. I've never heard the moon landing conspiracy communicated as something credible by a trained scientist. So that's pretty much one I only hear from people of a certain political persuasion and socio-economic status. I've heard the occasional political conspiracy theory from scientists, but that is generally ethnically based bias, not really broad-scale appeal. I think that scientists distrust the government because of the perceived ignorance of science that flows out of the government. For example, a major presidential candidate denying CO2's effect on climate change, or the Golden Fleece award given to some of the most productive science humans have produced. These types of things create skepticism. Another fair reason is that the government has greatly escalated the cost of doing science; the so-called unfunded mandates that the r............. were supposed to fix are actually worse now than ever. The regulations that the government has added, either directly or via a draconian NIH funding process, have greatly increased the cost and hassle of doing business. I don't mind the regulations, but where's the funding? Here's an example: if an institute has a grant from NIH, it has to follow the NIH guidelines, not just for the grant recipient but for the entire organization, even if the grant is only 5% of the funds the organization receives. I can give you examples of unfunded or inadequately funded mandates: the Patriot Act, the Animal Welfare Act, DEA regulations for dealing with Class II and III agents, etc. If you don't see these things, then I have to say, frankly, you don't know science. So why are scientists skeptical about the government . . . EXPERIENCE. When people argue that US funding for science has increased, they should know that this is not the case: because the overhead collected by the institution has increased faster, the actual amount of funding the labs receive has declined (cumulatively). This is because the institute either has to deal with the issues while the researchers are there (search and $$$$document$$$$ missions) or after they leave (lab cleanup). The overhead has increased because of the external demands placed on the institute by the government and by regulatory agencies instructed by the government.
  4. As they burn fuel they accelerate past the speed of sound, and past the speed of sound things don't turn well at all.
  5. I think the major point was missed. Let me state it like this: quantum mechanics creates many caveats, most of them unexplored. In that context scientists designed quantum computers which, with two qubits, showed great promise, theoretically and disregarding the unexplored-caveats issue. Adding more qubits revealed that quantum decoherence is progressively more disruptive, which wipes out the theory and now puts quantum computing back in the situation that more caveats need to be explored to deal with decoherence. In its current state a perfectly acting quantum computer would be very good at calculating an actual probability distribution given a two-state outcome p and q, calculating all the probabilities (a toy sketch of that two-outcome case is below). It would not be very good as your graphics processor and/or single-core CPU playing WoW for 3 AM 35-man city raids on a 1080p monitor (which, comparatively, is what 99% of the planet really needs their new faster computer for). Theoretically, however, some other QM discovery might allow us to make in-line processing faster, allowing you to, say, run a raid on a 75-inch monitor at 2400 DPI, testing the absolute limits of peripheral vision at, say, 133 frames per second. For example, some of the new atomic isomers allow flipping states, which might make faster memories or registers. Doubtful, but proposed. The creation of single-molecule transistors built with scanning tunneling technology may allow single-electron processing, which decreases the size, voltage, and amperage of a CPU (although you still have to find a way to wire it). In space, where a flight may take generations, where cosmic radiation is really destructive to electronics, shielding is quite expensive, and power does not last a long time . . . atomic isomers with very long decay lifetimes could be the solution; the problem is that the electronics need to operate at critically low temperatures and, to save electricity, at very low power. So modern low-amperage, low-voltage devices could provide a solution for deep-space probing. Communication still remains an issue, but provided the ship makes it with some sort of solar panel (again, questionable longevity in space) . . . if quantum entanglement could somehow be used for communication and entangled pairs could be stabilized for, say, 4000 years (a rather large pair of ifs), we could at some future point have tiny little probes flying out into space, radiating back home the locations of habitable planets which we would have no means to ever reach.
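As a toy illustration of the "two-state outcome p and q" case mentioned above, here is a minimal classical simulation of a single qubit's measurement statistics (NumPy assumed, and the amplitudes are arbitrary illustrative values); it only shows what "calculating the probability distribution" means here, not how a real quantum computer works:

```python
import numpy as np

# A single qubit state a|0> + b|1>, normalized; illustrative amplitudes only.
a = np.sqrt(0.3)
b = np.sqrt(0.7) * np.exp(1j * 0.5)

p = abs(a) ** 2          # probability of measuring outcome 0
q = abs(b) ** 2          # probability of measuring outcome 1

# Sample many "measurements" and compare the empirical distribution to p and q.
samples = np.random.choice([0, 1], size=100_000, p=[p, q])
print(f"theory:  p={p:.3f}, q={q:.3f}")
print(f"sampled: p~{np.mean(samples == 0):.3f}, q~{np.mean(samples == 1):.3f}")
```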
  6. Well, if you can build a big version of the F-22, or any aircraft that has a positive TWR above 20,000 feet, you can turn to vertical, head up at an 80° angle, and let g take away your v. When v is low enough you fire, go into a dead man's spiral, pull out, reignite your engines, and hope you can stabilize the craft before you land. You could have something like the SR-71 with a tank-fed air supply that keeps the engines spooling after critical air pressure falls off, and maybe a few pop-out turbines to steer the craft into an anti-stall dive and try to re-ignite the engines. Another option is to use something like an F-35 and provide vertical thrust after rocket separation, and then ignite the rocket. The problem, if you study aircraft, is that the heavier the aircraft, the harder it is to achieve TWR at high altitude. This is because engines run on air and the air thins; as you climb you are essentially losing the ability to climb, and eventually you top out. A 777 can, without retooling, operate to about 48,000 feet, but only without a load; the wings can be modified to fly at a lower IAS (Boeing designed the plane with short-runway takeoff capability, which affords better relative lift at lower airspeeds at the cost of its glide ratio relative to the 747). So it is possible to get some commercial heavies up there at sub-Mach speeds. IIRC at 45,000 feet the coffin-corner IAS is something like 220 kts, and at FL480 it would be like 190 kts, which puts you in the flap zone with non-fuel loadings. So these speeds (dynamic pressure along the AoA) are not too bad. Theoretically, if we grant horizontal flight, you could push up to maybe 55,000 feet (17 km) at sub-Mach speed and still have enough airflow over the control surfaces. That takes away about 90% of the atmospheric drag that kills the climb of smaller rockets. The problem is the marginal utility of the gain. At 30,000 feet, 2/3rds of the atmosphere is out of your way. If you launch vertically from Mount Everest, by the altitude at which Mach 1 is achieved in vertical flight, drag is not really worth worrying about because static air pressure is so low. If you then go to, say, 45,000 feet, only 1/6th of the atmosphere remains above you. What's the point of going the additional 10,000 feet to gain a 0.05 atm reduction when you have already reduced the pressure by 0.83 atm (a crude pressure-vs-altitude sketch is below)? If you are traveling at 0.83 Mach when the plane releases, the separated rocket slows down and also needs to pitch up, so by the time it reaches 55,000 feet it will probably just be crossing the sound barrier. The marginal utility of the altitude gain starts to drop at 10 km. I made this point a while back for smaller rockets, where drag really clobbers their ability to reach space. The perfect place to launch is between 3000 and 6000 meters elevation on an equatorial mountain (say in Ecuador). The idea here is that you would have to throttle back less around max Q, because max Q would occur at a higher velocity at a higher elevation. Because you slow down less, your rocket operates closer to its optimum, reducing the ascent time and therefore gravity losses. While you gain a little advantage in altitude, the major advantage comes from the reduction of drag losses between Mach 0.8 and 1.3 and of the remedial measures they force.
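A crude check of the "marginal utility" argument above, using a simple isothermal barometric approximation (assumed scale height of ~7.6 km; real ISA values differ a bit, so treat the numbers as rough):

```python
import math

SCALE_HEIGHT_M = 7600.0   # assumed isothermal pressure scale height, m
FT_TO_M = 0.3048

def pressure_fraction(alt_ft: float) -> float:
    """Rough fraction of sea-level pressure remaining at a given altitude."""
    return math.exp(-alt_ft * FT_TO_M / SCALE_HEIGHT_M)

for alt in (10_000, 30_000, 45_000, 55_000):
    print(f"{alt:>6} ft: ~{pressure_fraction(alt):.2f} atm remaining")
# ~0.67 at 10k ft, ~0.30 at 30k ft, ~0.16 at 45k ft, ~0.11 at 55k ft:
# most of the drag-relevant atmosphere is already gone well before 45,000 ft.
```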
  7. The DC-7 was a respectable plane; if it had had as many builds as the 737 they would have worked out most of the kinks. Although if I could have a plane it would be a 6B, and I'd see if I could have a spoiler installed (heh-heh). As long as the old planes weren't horsed, you probably could do a minor rebuild on the engines, replace the hydraulic fuel connectors, and get them back in the air again. But at 18 cylinders and 2800 kW you really are pushing capacity. The Comet was not . . .; unfortunately for in-wing-mounted jet technology, the technology was blamed for the accidents, but in fact they were due to poor fuselage design. Sometimes the failure of a nameplate is due more to perception than reality. The DC-7 could have been improved; just take a look at the differences between the 737-300 and the 737 MAX, it's hardly the same aircraft. The DC-8 and 707 were not really great aircraft, but they were the jet-set attraction. The 8 was noisy, produced a lot of smoke, and was inefficient. Within a few years after initial production all the engines of aircraft in use were replaced, and even the replacement engines are not efficient by modern standards (they would not meet current first-world airport pollution-control standards). The real threat to DC-7 usage was the DC-9: it could take off from many of the DC-7 runways or shorter ones, and it was efficient enough to run precisely the same routes with about the same number of passengers and make a profit. But Douglas simply stopped DC-6 and DC-7 production although they were widely popular, and that immediately makes the future of the aircraft one of obsolescence. You could still justify a very low rate of production of the type, even DC-3s still have uses, but you won't find any manufacturer willing to produce them.
  8. The AMD K6-2s were sold according to how much voltage they could take (and therefore how far they could be overclocked) before they blank-screened the computer. You could generally overclock the CPU; on a FIC 503A you could take any chip and 'run it up', and sometimes they took it and sometimes they didn't. It shortened the life of the chip. The Intels generally would survive overclocking with a good cooling fan (I overclocked the 166 MMX to 250 MHz and ran it on the lab server for years; actually its life came to an end because the NT4 server software was judged obsolete). Not so lucky with the AMD K6-2: within 6 years or so all had failed, and fortunately I had bought a few spare CPUs when the value point hit minimum (like 20 bucks for a CPU). The mobo itself was often the source of the problem.
  9. I think the Wright R-3350 was an 18-cylinder engine that was the primary powerplant on long-distance, high-capacity aircraft before the 707 (I don't count the Comet, as it was more of a prototype than an actual production model and had a very short life). It was indeed complicated; it was difficult to keep cool and to maintain performance at high altitude. Many of the biggest planes like the DC-7 used four of them (meaning they had 72 cylinders banging away to keep the bird aloft). The largest variant, the R-3350-42WA, produced about 2,830 kW (that is 2.8 MW, something to think about when we talk about generators in space that struggle to produce the 200 kW of power needed for something like the VASIMR powerplant). A run-of-the-mill airliner from the late 1950s could produce 8,000 kW. The turbo-compound 3350 got about 1/6th of its power from recovering exhaust energy through power-recovery turbines. Unfortunately the original design used a DC-4 wing, which really was underdesigned for an engine of that power or the added weight it could carry. While the engine is quite beloved, there were a number of incidents involving loss of airframe because of the engines. They were quite finicky with regard to resonance, and bad resonance has been blamed for a couple of airframe losses. Due to the higher power they could be flown for short periods above the NGO altitude, but this could result in engine overheating and other problems. Nowadays if a jet engine goes bad (excepting the Concorde) you turn around and return to the nearest compatible runway. If you read the incident reports for the DC-6 and DC-7 you find a number of losses due to catastrophic engine failure. Here are some kerbalesque things to mull over (remember there were only 343 of these produced; seventeen DC-7s remained on the U.S. registry in 2010). http://www.airliners.net/aircraft-data/douglas-dc-7/191
- loss of airframe and life when the number two propeller separated and penetrated the fuselage
- loss of airframe and life after an engine caught fire
- sinking of an airframe after a successful water landing in Sitka Sound just before 1 p.m. local time, after struggling with propeller problems for 45 minutes
- write-off of an airframe due to an off-runway landing after a feathering malfunction (might have been pilot error); another one of these occurred with the DC-6
In the models I flew, I found it extremely difficult to get the DC-6 and DC-7 off the ground when loaded with fuel; their angle of attack is laughably below the stall point for the wing as they come off the ground, and the only reason they lift off is the added ground effect: you gain 5 knots more of IAS and the nose needs to go down, scraping those tree tops as you fly by. Landing is equally difficult with fuel. One airframe was lost because a commercial firm overloaded the aircraft; the combination of engine power and the weight of the craft split the wings off the plane on a fly-out of Miami. On the longest-range flights, the loss of weight due to fuel made the aircraft rather prone to float, a problem for a 3 nm : 1,000 ft final-approach vector; the spoiler takes care of this problem on the 7. Here's the DC-6. Its Pratt & Whitney R-2800 18-cylinder engine was more reliable, but not near as powerful as the 3350 (the 15A, 16B, and 17B variants produced up to about 1,900 kW). A number of airframes were lost due to engine fire:
- engine vibrations forced one crew to return to Manaus; on the ground one of the right-hand engines burst into flames, and the fire spread to the fuselage, causing the deaths
- the number three propeller separated and struck engine four, causing the aircraft to break up
- one crashed seven miles off course while attempting to land on the blue-ice runway at Patriot Hills airport in Antarctica; there were no fatalities but damage was extensive and the aircraft was written off
Not too many jet aircraft can place that last one in their incident report list.
  10. Lol, giant tube in Uranus. We have the stellarator, the toroid, and now the enema fusion reactor. Let's consider that the rate of acceleration would be so small that the orbit would become so elliptical that Saturn or Jupiter would toss Uranus into the grass of outer space. Someday Earthlings may leave the solar system; I seriously doubt it will be 'we' by any stretch of the imagination. We are currently progressing at a rate of about 10,000 m/s of top speed per generation, and to get to the nearest habitable system in a reasonable 100 years you need about 0.1 c (3×10⁷ m/s), so we are talking about 3×10³ generations of progress at the current rate (rough arithmetic below). We are doomed unless we markedly alter the education and political systems on the planet, so I don't see us lasting that long. Yeah, lots of fantastic speculation here, nothing much in the way of reality. I think the next big breakthrough in speed will come when we fundamentally understand how quantum space-time resolves.
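Rough arithmetic behind that pessimism, as a sketch; the 10,000 m/s-per-generation figure is the post's own estimate, and the 25-year generation length is an assumption:

```python
TARGET_SPEED = 3.0e7        # m/s, i.e. 0.1 c
RATE_PER_GEN = 1.0e4        # m/s of top speed gained per generation (post's estimate)
YEARS_PER_GEN = 25          # assumed human generation length

generations = TARGET_SPEED / RATE_PER_GEN
print(f"~{generations:.0f} generations, ~{generations * YEARS_PER_GEN:.0f} years")
# -> ~3000 generations, ~75000 years at the current rate of progress
```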
  11. Don't forget the DC-3, DC-4 and DC-6 (if you can find one). These guys have range; the DC-3 is basically an any-runway lander, and the fuel it can use is stable down to very low temperatures. The jet aircraft and turboprops are more limited. Caravans are decent going in and out of places like Africa, but they really are a people mover. I would like to say the 3 and 4 are reliable, but I have to make the point that most of them are 70 to 80 years old.
  12. This comes at the problem from an MWI bias. I should point out that Copenhagen is the most conservative, and basically grants that there are unsolvables (defining itself as incomplete). MWI creates a mess in its wake that is unnecessary. And pilot-wave theory does not know how to reconcile quantum space-time with space-time (which, BTW, is problematic for all interpretations). The problem, I think, is in field character: we confuse consistent experimental behavior with actual proof. The essence of the resolution is that at fine-scale space-time (or to be more specific, at the local level, quantum space-time) the 'construct' which these fields both traverse and compose is not known (you can capitalize that and put exclamation points behind it to convey its importance). As one scientist remarked, we should stop guessing.
  13. ω²r = g, i.e. v²/r = g. g decreases with altitude, but not much. You can fly at approximately 1000 m/s on a winged vehicle up to about 20,000 meters, and winged vehicles can stay aloft with a TWR of 0.1 or less. Once you get to about 1000 m/s, which is roughly 1/8th of the velocity needed for orbit, and assume you are at the equator already traveling at about 400 m/s, you get about 1400 m/s along the orbital vector. Then (1.4/8)² ≈ 0.03, so the centrifugal effect relieves about 3% of gravity: you could get by with a TWR of roughly 0.97, maintain the inertia from the climb, and eventually be far enough from Earth to accelerate (quick numbers sketched below).
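A minimal sketch of that centrifugal-relief estimate; a low-orbit circular velocity of ~7,900 m/s is assumed, and the winged-climb numbers are the ones from the post:

```python
V_ORBITAL = 7900.0           # m/s, approximate low-orbit circular velocity (assumed)

v_vehicle = 1000.0 + 400.0   # winged-vehicle speed plus equatorial rotation, m/s

# Fraction of gravity cancelled by the centrifugal effect of horizontal speed:
relief = (v_vehicle / V_ORBITAL) ** 2
print(f"gravity relieved: {relief:.3f}")                            # ~0.03
print(f"effective TWR needed to hold altitude: {1 - relief:.2f}")   # ~0.97
```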
  14. http://www.bbc.com/news/science-environment-38173002 https://en.wikipedia.org/wiki/List_of_Progress_flights I was looking at this wiki; the vehicle type had relatively good reliability until about flight 135 (2011), and since then there have been 3 failures.
  15. It is likely that the compound is a complex mixture of naphthyl derivatives with other moieties. One example is the xylenes, in which the methyl moieties can sit on different carbons of a benzene ring. It could be composed of 1,4-methylene-naphthyl, naphthyl-ethanol, 2-naphthyl-propanol, etc. Since the naphthyl group is the only moiety common to the fuel, they may simply call it naphthyl.
  16. Points, lines, and spaces are idealizations of reality that begin to deviate substantially from observation at the very small scale.
  17. It's pretty bad, because on an ion drive you lose mass as you thrust, so over time the ship gets lighter. For long-term spaceflight, assuming a Mg ion drive, you could have 3 or 4 times the ship's dry weight in fuel; you could easily make up for the Cannae drive's advantage of no mass loss with the fact that the ion thrusters are ~35 times more energy efficient, and given that RTGs are weighty and solar panels are weighty also, you would need 35 times as many of them to get the same thrust as an ion thruster. I'd take the solid fuel, or even a dense gas, over the Cannae drive any day (a rocket-equation sketch of what that fuel fraction buys you is below). Where these things might pay off is if you had a shuttle that went between L2-L1 Earth-Mars and back, a hauler where you didn't need much dV, stayed reasonably close to the sun so you can have panels, and went back and forth. Suppose you were shuttling dry stock goods, like grains, wood or metal, even meat frozen to -50 °C which you could hold for years at a time. This type of transport could pay off because it becomes nearly free of cost. BUT - - - we still don't know whether that efficiency will remain in space; in space it might drop to 1 mN/300 kW. I might also add, it might make a good interstellar terraforming drive: you have 1000 years to reach 0.005 c, then 1000 years to slow down; meh, the lyophilized bugs won't mind. Yeah, no, this is not credible.
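A minimal rocket-equation sketch of what a 3-4x fuel fraction buys an ion-propelled hauler; the ~3000 s specific impulse is an assumed, typical ion-thruster figure, not from the post:

```python
import math

G0 = 9.81
ISP_S = 3000.0              # assumed specific impulse for an ion thruster, s

def delta_v(fuel_to_dry_ratio: float) -> float:
    """Tsiolkovsky delta-v for fuel mass expressed as a multiple of dry mass."""
    mass_ratio = 1.0 + fuel_to_dry_ratio    # (dry + fuel) / dry
    return ISP_S * G0 * math.log(mass_ratio)

for ratio in (3, 4):
    print(f"fuel = {ratio}x dry mass -> delta-v ~ {delta_v(ratio) / 1000:.0f} km/s")
# -> roughly 41 and 47 km/s, far more than an Earth-Mars cargo loop needs
```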
  18. The concept here is that if you are going to go interstellar and you are using fusion energy, then your accelerations are going to be limited to somewhere in the 0.001 to 0.1 range. Your interstellar travel times are going to be many years, and you need a big ship (a quick constant-acceleration sketch is below). Consequently you need a lot of reactors, or bigger reactors. A 0.6M form factor could maybe run the non-propulsion systems, but not the propulsion systems.
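A back-of-envelope sketch of why those accelerations imply long trips and big ships, assuming the 0.001-0.1 figures are in m/s², a burn-flip-burn profile, a 4.2 light-year trip, and ignoring relativistic corrections (which start to matter at the high end):

```python
import math

LY_M = 9.461e15
DISTANCE_M = 4.2 * LY_M      # roughly the distance to the nearest star system

def trip_years(accel_ms2: float) -> float:
    """Accelerate to the midpoint, decelerate to rest: t = 2 * sqrt(d / a)."""
    return 2.0 * math.sqrt(DISTANCE_M / accel_ms2) / (3600 * 24 * 365.25)

for a in (0.001, 0.01, 0.1):
    print(f"a = {a} m/s^2 -> ~{trip_years(a):.0f} years one way")
# -> roughly 400, 130, and 40 years respectively
```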
  19. Do points exist? A quantum singularity is a point. So let's break this down into the physical world, so that we have a better misunderstanding of space-time. We should start with the basics: "space-time is filled with quantum space-time." Needless to say, this is cool-sounding bunk. Space-time is not filled with quantum space-time; space-time is quantum space-time and the filling at the same time, and the field structure in its composite is the universe. All of these things comprise the energy and, by definition, the mass, and so they are everything. OK, so that is dumbed down. Where the dimensionality pops out of this, space and time, is in the interactions. The quantum interactions, I suppose, are transient in nature but guide the structure of space-time by very loosely creating cells in the foam whose units are quantum space-time. For whatever reason time flows in a single direction on our 'side of the universe' (if time is treated like space), and space-time is a manifestation of the resolution of the foam from the quantum level to the visible level. So a point in quantum space-time has no dimensionality, because its interactions have basically amorphous dimensionality. It is through the mass of interactions that dimensions and time have meaning. To put it otherwise: because we can only measure the number of interactions as a composite, and cannot measure individual interactions in the measurement of space and time, we have a coherent measuring system. Imagine the situation as clouds in the sky. If you attempted to measure two abutting clouds you would have a problem. Suppose you had 100 people measuring the distance between two very close clouds; your mean distance between the guessed centers of the clouds would have a large relative deviation. Now if you measure clouds kilometers apart, the measured distance has a small relative deviation (a toy simulation of this is below). Now imagine measuring the vector coordinates of many clouds in different layers. After you have plotted cloud after cloud you see layering in your structure and distances, you can see motion over time, and relative motion. Just measuring the distance between two clouds, no matter how many measurements are taken, does not complete the picture. So it is with space-time: a few measurements of points in quantum space-time do not help us; we need a near-infinite number of measurements, and then dimensionality and time appear. Oddly, the seemingly large numbers needed to make the smallest sense of things are a triviality in the quantum world of events. But science can still see individual quantum events: a single photon registering in the eye, or on a photomultiplier tube. The problem is that these events often lack dimension; for example, we cannot simultaneously detect momentum and position. But the events we see are interactions, generally of a particle with a large number of fields.
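A toy Monte Carlo version of the cloud-measurement analogy above (NumPy assumed; the "fuzziness" of each cloud is modeled as Gaussian noise on where each observer places its center, with arbitrary illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
FUZZ_M = 50.0          # assumed uncertainty in locating a cloud's center, meters
N_OBSERVERS = 100

def relative_spread(separation_m: float) -> float:
    """Relative deviation of the measured cloud-to-cloud distance."""
    c1 = rng.normal(0.0, FUZZ_M, N_OBSERVERS)
    c2 = rng.normal(separation_m, FUZZ_M, N_OBSERVERS)
    distances = c2 - c1
    return distances.std() / distances.mean()

for sep in (100.0, 10_000.0):            # abutting clouds vs clouds km apart
    print(f"separation {sep:>7.0f} m: relative deviation ~{relative_spread(sep):.2%}")
# close clouds: the spread is a huge fraction of the distance; distant clouds: tiny
```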
  20. In 100 years the Cannae drive will still produce terrible thrust. What are we talking about? Resonance orbitals and charge repulsion, operating in space. Consider the rate of mass transfer of charge in space per unit time and the scale of a resonance thruster: 3.85 GHz is a wavelength of about 8-10 cm. Now square that: roughly 0.01 m², and this is our scale (meaning we can fathom, say, 1000-fold increases, but not billions or trillions). Now imagine how much mass/charge passes across that unit of area within a second: we are talking the ng-µg range even in LEO (rough numbers below). And how fast will the drive accelerate those particles in the relevant space? Eventually Cannae needs to prove itself in space, and there we will see that it's prolly just a big resonance-orbital-creating device with virtual orbitals that stretch, at declining frequencies, meters from the thruster at most. The tiny amount of hydrogen/plasma that passes (and might not even be of a wavelength that will be accelerated) means basically trivial thrust; with refinement they might get it back to the level of thrust seen in the lab, but certainly not N-scale thrust per 0.01 m² of thruster cross-sectional area.
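A rough order-of-magnitude check on the "ng-µg even in LEO" claim; the ambient density near ~400 km and the 7.7 km/s orbital speed are assumed typical values, and the 0.01 m² aperture is the post's own scale:

```python
RHO_LEO = 1e-11        # kg/m^3, assumed total atmospheric density near ~400 km
V_ORBIT = 7700.0       # m/s, approximate orbital speed in LEO
AREA = 0.01            # m^2, the post's ~10 cm x 10 cm thruster-scale aperture

# Mass sweeping through the aperture each second at orbital speed:
mass_flux_kg_s = RHO_LEO * V_ORBIT * AREA
print(f"~{mass_flux_kg_s * 1e9:.0f} ng of ambient gas/plasma per second")
# -> on the order of hundreds of ng/s, give or take with solar activity,
#    i.e. almost nothing to push on
```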
  21. He's right: if you want to wait until 1000 m, use a TWR of 1.4. The problem you will have is that you are going to gain acceleration over time as fuel bleeds off (quick sketch below). If you hold the thrust, then your velocity will resist turning; turning is much easier at a lower velocity. If you wait until you have a high velocity to turn, then what happens is that your side-drag becomes lift.
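A minimal sketch of how acceleration grows during a constant-thrust burn as propellant bleeds off; all numbers (thrust, masses, burn rate) are made up for illustration, in KSP-ish magnitudes:

```python
THRUST_N = 275_000.0     # constant thrust, N (illustrative)
M_DRY = 8_000.0          # dry mass, kg (illustrative)
M_FUEL = 12_000.0        # initial propellant, kg (illustrative)
BURN_RATE = 60.0         # kg/s (illustrative)
G0 = 9.81

t = 0.0
while M_FUEL > 0:
    m = M_DRY + M_FUEL
    twr = THRUST_N / (m * G0)
    if int(t) % 50 == 0:
        print(f"t={t:>5.0f} s  TWR={twr:.2f}")
    M_FUEL -= BURN_RATE * 1.0   # one-second steps
    t += 1.0
# TWR climbs from ~1.4 at liftoff toward ~3.5 near burnout with these numbers,
# which is why a rocket that waits to turn is fighting a lot more velocity.
```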
  22. In the field of biologics it means, for instance, taking the specificity of an antibody, often humanized, and optimizing it for more effect or fewer adverse effects. It could mean screening a phage-display library for binding proteins, and then attaching one via polyglycine to the end of the antibody. It could mean taking a rough domain and building a better peptide that has possible strain-specific MHC class II antigen-binding sites and can generate more antibody of the desired specificity. Recloning. Since they are non-NIH-funded and can operate anywhere, they are free to use techniques essentially banned in universities. They generate and test more clones, maybe ten times as many sequence-motif specificities, and select 3 or 4 versus 1. Next, they can test them on different diseases; 1 may work better for one disease versus another. Of course they have a whole setup for humanization. You can make IgG of any class, or IgA or IgD, depending on what you need. These fall under the category of '-likes', and frequently we have little information about what they are from a primary-structure point of view. Etanercept is a like: it's an antibody-like construct, but not one you will ever find in nature. Then there are a large number of hormones, cytokines, lymphokines, cognate receptors, etc., that are sequence-farmed for their ability to generate decoy drugs that essentially titrate the level of a cross-talk agent in serum or in the environment of the pathology (the site of cancer or inflammation). What they essentially do is take a sequence of a protein, maybe 10 to 30 residues, patent it, and claim any further use of it. "The prototypic fusion protein was first synthesized and shown to be highly active and unusually stable as a modality for blockade of TNF in vivo in the early 1990s by Bruce A. Beutler, an academic researcher then at the University of Texas Southwestern Medical Center at Dallas, and his colleagues.[2][3][4] These investigators also patented the protein,[5] selling all rights to its use to Immunex, a biotechnology company that was acquired by Amgen in 2002.[6]"
  23. The biologics are considerably more difficult to make than a simple antibiotic discovered in a dish. Often the antibodies start out in one species and end up being humanized (meaning information in one protein is replaced, bit by bit, by information from another). There is a considerable amount of know-how and experience that goes into making some of these. First you start with an antigen; it can be a whole protein or a segment of it. But that's actually not where it starts, because published X-ray crystallographic data has probably revealed a binding site or an enzymatic site you are interested in. So the first stage is to immunize a model species such as Balb/C and generate an immune response. From this point you need to test for specificity, and you need to find an assay that will give you the results you want. In the old days, once you had specificity you would then create a hybridoma and use multi-tier limiting dilution to screen the hybridoma lines for the specificities most similar to what you want (there is a whole process in that). We don't yet have a biologic, just a cell that produces something. Next the cells are expanded and injected into the peritoneal cavity of mice. The mice are then drained, and you have an amount of antibody that you can use for testing in experimental animals. You will be doing this on probably 10 to 50 antibodies; that's on the order of 50 mice per antibody. Meanwhile your cells are frozen in liquid nitrogen. The fast technique is gone now: pristane priming of mice has been pretty much banned by NIH for any institute that uses NIH money. The antibodies that make it to publication will be given a name like AMT-150 or B6H2F1 (limiting-dilution nomenclature). Then it goes through several rounds of publication and testing in different laboratories. Once the antibody has been tested, the human non-variable parts of the heavy chain and light chain will be genetically engineered onto the mouse variable regions (one or both chains, depending on how the antibody works). This is then expressed in an E. coli expression system. This whole process, again, is not one step and frequently does not work the first time; many PCR primers need to be made and remade . . . Then from the E. coli expression system you first have to test that it expresses, then you probably will have to move it to a system that optimizes folding, and you can begin the first round of retesting. About this point big pharma is sniffing at your door; they might purchase here. It may have to be moved to a eukaryotic expression system and retested still further. Next you have to absolutely purify it of all microbial contaminants. The Hep B vaccine that was made about 15 years ago had a small amount of contaminant that caused severe autoimmune disease in some of the recipients. Once it's purified you can retest it in a model in-vivo system. Then you have to begin testing in human volunteers for tolerance (off to the prison system you go), but before you do that you have to establish that it's safe in animals. Next you can start looking, probably in some country with relaxed protocols, at whether the drug is effective in humans. In very fatal diseases they may allow you to test immediately. In one such test a drug was given and 6 of 12 recipients promptly died. And then the pharmaceutical companies just buy you out; they pay your university most of the price and you probably get a new lab or a raise.
Start to finish, around 15 years. The companies can do it more quickly, but they often don't publish what they have done, and some of the drugs have been pulled, so . . . After the wheel has been invented several times across the field, big pharma starts making its own product de novo, but in many cases we hardly know what it is that they produced. Etanercept is an example. https://en.wikipedia.org/wiki/Etanercept
  24. Aside from this being troll bait, if you wait long enough you will find a variety of opinions on the topic. As an assistant to the editors of 3 journals, I would say a few things. 1. The quality of research in established science is improving. The quality of the statistics has greatly improved in the last 20 years, in some fields from a baseline of using nothing more than averages. 2. If you look at the number of rejected papers, this has greatly increased also, but not due to established science; it's the science bomb that has come from India and China. Even those publications have increased in quality. 3. Established science is doing more science, and better science, with less money (relative to GDP) than 25 years ago; however this will not go on indefinitely. The proportion of funding for small labs that comes from NSF and NIH has declined greatly, and the funding percentage at many NIH institutes might as well be zero for small to medium-sized labs; pretty much all of the funding goes to large facilities with a dash of political impetus. The nation that invests the most in science will be the leader in the next generation, and China is positioning itself there (and they lack many of the myth-based hangups that Americans have). It's not just about the science a nation does; it's about the techniques it can perform, the people it can train, the machines it needs to build. A nation that can do these things for its science can also do them for its industries. When you look, for instance, at the biologic drugs, many of them can trace their origins, or those of a comparable or 'like' drug, back to a university laboratory. And many of those invented by pharmaceuticals simply copy the functionality of the laboratory-produced chemical. You see pharmaceuticals touting the money spent on research, but for cancer products they simply bought the rights from the university. Their research, so to speak, is promoting the drug for clinical trials and trying to get FDA approval.