Thursday, September 15, 2022

Not your usual dehumidifier, an Addendum

I ended my previous post discussing two different approaches that combine an M-Cycle(-ish) style chiller with a liquid desiccant dehumidification system.  One thing I failed to mention regarding the "bootstrap" approach, where the input air to the M-Cycle-Like chiller (hereinafter the MCL?) is dehumidified using LD, is that, if it works, it should output water that is chilled below the ambient-air dew point, simply because the water content of the input air is lower.  It remains to be seen if the end result justifies the added complexity of such a system.

The extra-cold water coming out of such a chiller might extract water from interior air to help dehumidify it -- but only if the inside heat exchanger is allowed to cool below the dewpoint.  Since we're running warm interior air through the HX I wouldn't count on it but, since up to this posting I haven't done anything other than make and characterize a plain-vanilla "swamp cooler" style chiller, who knows for sure.  I don't.  

I sort of want it to get cold enough, but don't at the same time, because if it DOES get cold enough to condense water I will need to add a way to take care of the water, rather than let it drip on our expensive wood floors!

Monday, September 12, 2022

Not Your Usual Dehumidifier

 Early in my quest for a DIY A/C system that might actually work in our (often) humid summers I came across a couple of YouTube videos produced by Tech Ingredients that led me down an interesting path.

The first one, link here, introduced me to the idea of liquid desiccants.  It used liquid desiccant (LD for short) to pre-dry air that is cooled by flowing through an evaporative cooler.  It was fairly complex, using a second evaporative cooler to cool down the hot and regenerated liquid desiccant (more on this later in my post).  The second one, link here, is a system they built that was (hopefully) sized for a real-world application but didn't work all that well, possibly due to poor efficiency of their chilling tower and desiccant-solution tower.  I think that their spray head scheme didn't work too well -- it's likely that most of the spray quickly wound up flowing down the inner walls of the tube.  The laminar flow of the counter-flowing air then formed a "dead layer" that prevented good contact between the bulk of the air and the water or desiccant.  There are devices called "turbulators" that break up laminar flow into more-turbulent flow that might improve the performance of those towers.

So, what is liquid desiccant (LD) and why is it particularly useful for drying air for A/C purposes?

Folks should be familiar with one-shot desiccants like the silica gel packets found in prepackaged food, vitamins and other food supplements, or products like "Dry-Z-Air", used to capture moisture in locations like RVs, closets etc.  In the latter case, it actually uses the same chemical that is often used in LD applications -- calcium chloride.  I should add that all these desiccants can be regenerated by getting them hot enough to release the water they have absorbed.  I have purchased silica gel beads that actually have an indicator in them to show when they are exhausted and need to be baked so they can be re-used.  And I've seen at least one blog post where someone did something similar with calcium chloride, but it was a pretty dangerous process -- it's necessary to get CaCl pretty hot, and at that temperature it is very corrosive.

There are other solid desiccants like zeolites, some types of clay, molecular sieves etc.  They HAVE been used to perform continuous dehumidification by putting them in a rotating wheel or drum configuration.  One side of the drum is heated and air is passed through it.  The high temperature plus air flow pull the water out of the desiccant.  Then the wheel rotates out of the hot zone into a cool zone, so the desiccant can again absorb moisture.  Then inside air is passed through the wheel and dried.

Systems like this have been used in industrial applications where other process machinery generates high temperatures, so the heat is re-used.  Since the desiccant wheel needs to be heated anyway, re-using waste heat saves money.  They aren't used for private houses because houses typically don't have that kind of high-quality waste heat available; they also are pretty large, since the system needs enough capacity to significantly dry the air.

In contrast, LD solutions -- typically based on one of the following:  lithium chloride, calcium chloride, potassium formate or potassium acetate -- don't require really high temperatures to be regenerated.  In fact, they can be regenerated with systems that are very similar to (good) solar hot water heaters.  This is very attractive because peak A/C demand coincides with lots of sunlight.  Once your solar LD heater is built, the energy is "free".  Not quite, because the LD has to be pumped through the rest of the apparatus, but that doesn't take much energy to accomplish.

Most research in the field has found that lithium chloride is the most efficient LD.  It also is the most expensive, so it's automatically eliminated from my consideration.  Among the rest, calcium chloride probably is the most efficient but it has some problems.  The first is that the solution, which is about 35-40% CaCl, is very corrosive, so the pipes, pumps and heat exchangers used to heat and cool it have to be either plastic, stainless steel or ceramic.  This jacks up the price, at least for heat exchangers and pumps.  Of course, its corrosive nature is worse at elevated temperatures, so a good design approach is to place our expensive pumps in the loop where the LD is at its lowest temperature.  This would be right in front of the regenerator, which heats the LD up in order to shed the water it absorbed.

Another problem is that concentrated CaCl solutions have a very high freezing point, 40F and higher, so it's necessary to keep the solution warm enough that it doesn't freeze and stop the system from working.

The last problem also is related to CaCl's corrosive nature, and that is "carryover".  Since the dehumidifier designs have to put interior air and CaCl solution in intimate contact, there is the possibility of CaCl solution droplets being carried into the interior space, where they can corrode metal and degrade fibers -- rugs, furniture, clothing -- so the design of the absorber portion of the system is very important.  This, by the way, is another problem with the Tech Ingredients approach, because they deliberately try to atomize their LD solution.  They are depending on some kind of post-absorber filtration setup, one way or another, to prevent carryover.  Absorbers that use air flowing at relatively high speeds are particularly susceptible to this problem.

Other LD solutions like potassium formate and potassium acetate are more benign in this regard, but they (1) aren't as efficient, (2) are more expensive; and (3) in the case of potassium acetate, its solution is reported to be very viscous so it is hard to pump it through the dehumidifier system.

It appears that the best way to prevent carryover is to use either packed-bed absorbers or so-called falling-film absorbers.  Unfortunately, the best media for packed beds is pretty expensive -- I calculated that a 1 cubic-meter absorber would require over $2,000 worth of media (basically specially-designed plastic whiffle balls).  So some kind of falling-film scheme looks best.

For developing different types of absorbers I'm planning on sampling the exit air with a high-voltage arc to excite any calcium that is present, to be analyzed with a (naturally, home-made) visible-light spectrometer.  That will quickly reveal whether or not the design has any carryover.

The Tech Ingredients' second design is meant to use the same LD solution to simultaneously cool the air and dehumidify it, in contrast to their first design, which just dehumidifies the air entering an evaporative chiller.  However, their second design depends on an unassisted evaporative chiller to cool the LD solution -- not viable for a region that has high humidity, since the ability to cool the LD solution is limited.  The problem with their first design is sort of related, because it also uses an unassisted evaporative cooler to chill the LD solution.

There are two alternatives that could improve the situation.  First, build an oversized chiller using an air pre-cooler to sorta-kinda replicate a Maisotsenko-cycle system; and use the chilled water to both cool the house and operate an LD dehumidifier's absorber in a separate system to control the house's interior humidity level.  The second is a kind of bootstrap system where the chiller is fed by an outside "feed" air flow that has been dehumidified by an LD system -- which in turn uses the same chiller water.  It's bootstrapped because as the chiller operates the dehumidifier front end, the dehumidifier becomes more and more effective -- the dehumidifier reduces the feed air's RH, which decreases the wet-bulb temperature, so the chiller water temperature goes down and further reduces the RH of the input air.  And so on.  I haven't found any papers that describe a system like this, so at this point it is a wild guess whether or not it is a real improvement.
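To put a rough number on how much pre-drying the feed air could help, here's a toy calculation in Python.  It uses the Magnus saturation-pressure formula and Stull's (2011) empirical wet-bulb fit; the 30% moisture removal is purely an assumed figure for illustration, not a measured LD absorber performance:

```python
import math

def es_hpa(t_c):
    # Magnus formula: saturation vapor pressure in hPa at temperature t_c (C)
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def wet_bulb_c(t_c, rh_pct):
    # Stull (2011) empirical wet-bulb approximation (T in C, RH in percent)
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct) - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

t_amb, rh_amb = 32.2, 36.0              # the afternoon conditions from my earlier posts
e_amb = es_hpa(t_amb) * rh_amb / 100    # actual vapor pressure of the ambient air, hPa

wb_plain = wet_bulb_c(t_amb, rh_amb)    # wet bulb of the untreated feed air

# Assume (pure guess) the LD stage strips 30% of the moisture at constant temperature:
e_dried = 0.7 * e_amb
rh_dried = 100 * e_dried / es_hpa(t_amb)
wb_dried = wet_bulb_c(t_amb, rh_dried)

print(f"wet bulb, untreated air: {wb_plain:.1f} C")
print(f"wet bulb, pre-dried air: {wb_dried:.1f} C")
```

Even that modest amount of drying knocks a couple of degrees C off the wet bulb, which is the floor for a plain evaporative chiller -- so the bootstrap idea at least points in the right direction.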

Friday, September 9, 2022

Cycles: The Mysterious Maisotsenko Cycle

 The M-cycle is touted as a new thermodynamic cycle that will solve the world's air-conditioning problems (natch, by the companies selling them).  But is it really that good, and just how does it work?  When I look at drawings of air conditioners that use the M-cycle it seems pretty confusing, with all the different pieces and "wet channel" and "dry channel" stuff.  Not too easy to figure out, perhaps deliberately so.  But by looking more closely at our trusty Psychrometric Chart things start to become much clearer.

If you have looked at my previous blog posts on DIY A/C you have already seen this:


It shows the different "paths" taken by evaporative coolers (solid line) and the more common compressor-based A/C systems, shown by the dotted line.

Suppose we sort of combine them.  Let's add a special type of heat exchanger, very similar to what's called an HRV, a Heat Recovery Ventilator.  It is an air-to-air heat exchanger used to replace stale air inside a house with fresh exterior air, while recovering the heat contained in the exhaust air.  They typically are cross-flow devices that use stacked corrugated plastic sheets -- the interior air flows across the outside surfaces of the sheets and the exterior air flows at right angles through the channels formed by the corrugations.  Or vice-versa, makes no difference.  This is a simplification, because the air paths have to be kept separated so they only exchange heat -- they can't mix.  I have seen a number of DIY versions so making your own HRV is definitely feasible.  The biggest problem is that the corrugated plastic sheets are somewhat expensive, but I think I can make a similar kind of device using corrugated metal roofing with insulated panels on each side to force the air to flow down the corrugations.  It's much less expensive but (probably) more bulky.  Since it would be a type of counterflow system rather than the conventional cross-flow of other HRVs it might be pretty efficient.  The corrugated-roofing approach will likely be the subject of another blog post.  For now, I just need to point out that making your own air-to-air HRV is not much of a stretch for an intrepid DIYer.
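For what it's worth, the textbook effectiveness-NTU relations back up the hunch that a counterflow core should beat the usual crossflow design.  This little sketch compares the two geometries; the NTU of 3 and balanced flows (Cr = 1) are just assumed, plausible values for a DIY-sized core:

```python
import math

def eff_counterflow(ntu, cr):
    # effectiveness-NTU relation for a counterflow heat exchanger
    if abs(cr - 1.0) < 1e-9:
        return ntu / (1.0 + ntu)          # special case for balanced flows
    x = math.exp(-ntu * (1.0 - cr))
    return (1.0 - x) / (1.0 - cr * x)

def eff_crossflow_unmixed(ntu, cr):
    # standard approximation for crossflow with both streams unmixed
    return 1.0 - math.exp((ntu ** 0.22 / cr) * (math.exp(-cr * ntu ** 0.78) - 1.0))

ntu, cr = 3.0, 1.0   # assumed exchanger size and balanced air flows
counter = eff_counterflow(ntu, cr)
cross = eff_crossflow_unmixed(ntu, cr)

print(f"counterflow effectiveness: {counter:.2f}")
print(f"crossflow effectiveness:   {cross:.2f}")
```

At the same size, the counterflow arrangement recovers several more percent of the available temperature difference, and its advantage grows as you make the exchanger bigger.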

So, let's place our home-made HRV inline with our home-made evaporative chiller.  The chiller's input air comes from the output of one of the HRV channels, and the chiller's output air is routed to the other HRV's channel.  In this way the input air to the chiller is cooled before it enters it.

This might seem like a waste of a perfectly good HRV, because we know that the RH of the cooled air increases, which decreases the effectiveness of our chiller.  And so it does, but that is more than offset by the attendant decrease in the resultant wet-bulb temperature.  I can show that by modelling our new system in a stepwise manner, like this:

Step 1:  We turn our chiller on.  The air entering it is at ambient temperature.  The air passing through the chiller follows the solid-line path on the psychrometric chart, and exits at a temperature close to the wet-bulb temperature.  It won't be equal to the wet bulb temperature because chillers aren't 100% efficient at transferring the full temperature drop of the water to the air.  Let's say that the chiller is 90% effective at that, so the air exits at 22.3C.  From there, it passes through the HRV, cooling the air entering the chiller.  Let's say that the HRV also is 90% efficient.  That translates to the chiller getting air that's been cooled to 23.3C.

Step 2:  The chiller further cools the 23.3C air.  Looking at our psychrometric chart, we follow the dotted line over to where it intersects the 23.3C point on our temperature axis and see that the wet-bulb temperature now is 18.5C.  This is almost 5 degrees Fahrenheit lower than the wet-bulb temperature we started with.

Let's do one more step, just to see what happens.

Step 3:  Given the same efficiencies of our chiller and HRV, the air entering the chiller now is at 20.3C, giving us a wet-bulb temperature of 17.5C.  This is a further temperature reduction of 1 degree Centigrade, for an overall improvement of 6.7F.  Assuming the same efficiencies as before, the ambient air at 90F has been cooled to 64.4F.  For comparison, a single-pass chiller would output air at about 72F.

If we model our system in a continuous rather than stepwise manner we will find that the chiller's exit air asymptotically approaches the dew point, which is about 15C.  It will never get there because we have to evaporate SOME water to get any kind of cooling at all.  And in a real-world A/C system using this approach there will be significant heat input from the house we are trying to keep cool.
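The stepwise model above is easy to turn into a few lines of Python.  I'm using the Magnus formula for saturation vapor pressure and Stull's (2011) empirical wet-bulb fit in place of chart readings, so the step values land within a few tenths of a degree of the numbers above:

```python
import math

def es_hpa(t_c):
    # Magnus formula: saturation vapor pressure in hPa at temperature t_c (C)
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def wet_bulb_c(t_c, rh_pct):
    # Stull (2011) empirical wet-bulb approximation (T in C, RH in percent)
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct) - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

t_amb, rh_amb = 32.2, 36.0
e_amb = es_hpa(t_amb) * rh_amb / 100   # vapor pressure; constant through the (dry) HRV
eff_chiller, eff_hrv = 0.9, 0.9        # the 90% figures assumed in the post

t_exit = t_amb                         # before the chiller is switched on
history = []
for step in range(12):
    t_in = t_amb - eff_hrv * (t_amb - t_exit)    # HRV pre-cools the incoming air
    rh_in = 100 * e_amb / es_hpa(t_in)           # RH climbs as the air cools
    t_wb = wet_bulb_c(t_in, rh_in)
    t_exit = t_in - eff_chiller * (t_in - t_wb)  # chiller covers 90% of the drop
    history.append(t_exit)

for i, t in enumerate(history[:4], 1):
    print(f"step {i}: exit air {t:.1f} C")
print(f"converged exit air: {history[-1]:.1f} C")
```

The exit temperature settles a degree or so above the 15C dew point rather than reaching it, which matches the hand-waving argument: with less-than-perfect components there's always some evaporation needed to drive the cooling.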

I think this is the basis of M-cycle air conditioning.  One additional wrinkle is that the M-cycle messes around with the relative volumes of air (via the Wet and Dry channels) so the cooled air delivered to living space isn't as humid as it would be in my example above.  However, since I'm going to run the chilled water through a water-air heat exchanger placed inside the house, I don't need to worry about the RH of the air exiting my DIY M-like  A/C system.  Just water leaks, perhaps from condensation on the heat exchanger (HX for short).

A system like this, unlike a compressor-based system, does little to nothing to address the increased RH due to the temperature drop.  However, there are ways to address this, also in a DIY manner that I will describe in yet another blog post.  It uses calcium chloride, but not as a one-shot "dry-z-air" type of system.  That's all I will say for now on that subject.  It gets complicated when we throw in dehumidification.

To summarize, we can noticeably improve the effectiveness of an evaporative cooler by adding a relatively simple air-to-air heat exchanger to the air flows entering and exiting the chiller.  

A do-able DIY system would likely be an indirect-cooled one, where the cold water in the chiller would be pumped through a water-air HX inside the house.  The HRV could be made from either a stack of metal sheets ($$$), corrugated plastic sheets ($$) or -- perhaps -- corrugated metal roofing panels ($).  In addition to cost, those options are approximately ranked in order of their physical size.  I'm guessing about the use of the corrugated metal but I think it's likely to take the most room.  However, it will be outside the house so that will be less of an issue.  If need be I think it's possible to stack the metal panels so we still get decent HRV efficiency in a smaller space.  The HRV design will be more complicated but, again, feasible for a good DIYer to make.

The chiller design also will be more complicated because the supply air has to come from our HRV and its exit air has to be routed back into the HRV.  My original open-sided design would have to be put in a sealed box that (1) provides for relatively unrestricted air flow and (2) keeps the input and output air flows well separated.  The four-sided tower scheme might have to go away.  A chiller using a single evaporation pad would be very easy to make (just a box with the chiller in the center), but would have to be pretty big to have the same surface area as the tower.  Maybe a set of pads placed in a "W" pattern?  How do I get water to them without introducing air leaks?  And just how much surface area do I need for the pad(s), anyway?  Does the enclosure need to be insulated? Time to do some thinking and sketching..


Wednesday, September 7, 2022

Evaporative Cooling Vs. Compressor-Driven A/C

In this post I'm going to explain more of what I've learned about these two types of air conditioning.  In case new readers are wondering why I'm interested in evaporative cooling, it's because the technology is pretty easy to build yourself -- but there are definite limitations that come along with it.

The psychrometric chart shown below has been marked to illustrate the two different kinds of cooling.  I'll then discuss some interesting differences between them.



I've drawn a solid line and a dotted line.  They both start at the same point, 32.2C (approximately 90F) and 36% relative humidity (RH).  That was the outside afternoon temperature at our house a week or two ago.  The solid line is drawn along a constant-enthalpy line, which just means that the total energy of the system remains constant.  Note that the relative humidity increases and so does the humidity ratio (basically, the amount of water in the air, shown on the right side of the chart).  This shows what's going on when evaporative cooling is taking place.  You might think that this mechanism can't occur without a change in energy because the air is being cooled:  but that is balanced by the energy carried away by the water as it changes from a liquid to a gas.  To distinguish the two "forms" of heat, the energy contained by the air (oxygen, nitrogen and a small amount of carbon dioxide) is called "sensible heat" -- possibly because it can be "sensed" by a thermometer?  I haven't investigated the origins of the name so that is just a guess.  And the energy contained by the water vapor is called "latent heat", because it only plays a role when the water either evaporates or condenses.  Latent heat is a big deal in the A/C world because in humid climates it can be a substantial contributor to the energy (as in, coming out of a wall socket) needed to cool and condition air.

The humidity ratio for evaporative cooling increases because evaporating water is being used to cool the air, so the amount of water in the air increases.  So the latent heat increases, balancing the sensible heat drawn out of the air:  the overall energy (enthalpy) remains constant.
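For anyone who wants to check the constant-enthalpy claim numerically, here's a sketch using the standard moist-air enthalpy formula.  The starting humidity ratio of 0.0108 kg of water per kg of dry air is my own estimate for 90F at 36% RH, and the 22C end point is just a chosen example:

```python
def enthalpy_kj_per_kg(t_c, w):
    # moist-air enthalpy per kg of dry air: sensible heat of the dry air
    # plus latent + sensible heat of the water vapor (w = humidity ratio, kg/kg)
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

t1, w1 = 32.2, 0.0108        # assumed starting point: ~90F, 36% RH
h1 = enthalpy_kj_per_kg(t1, w1)

# Evaporatively cool to 22C: solve for the humidity ratio that keeps h constant
t2 = 22.0
w2 = (h1 - 1.006 * t2) / (2501.0 + 1.86 * t2)
h2 = enthalpy_kj_per_kg(t2, w2)

print(f"enthalpy before/after: {h1:.1f} / {h2:.1f} kJ per kg dry air")
print(f"moisture picked up: {w2 - w1:.4f} kg/kg")
```

The sensible heat drops by roughly 10 kJ/kg as the air cools, and the latent-heat term grows by exactly the same amount as the air picks up about 4 grams of water per kilogram -- the trade the solid line on the chart is showing.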

The problem with so-called "swamp coolers" is that they are not very effective in humid climates, for two reasons.  First, as the humidity increases, the wet-bulb temperature increases, so the chiller can't deliver air that's much colder than what entered it.  Second, the chiller increases the relative humidity of the air that exits it.  This reduces our body's ability to cool itself via evaporative cooling, so our perception of comfort is reduced.

Now let's move on to the dotted horizontal line.  That is what is going on when conventional compressor-driven cooling occurs.  The line follows a constant-humidity line because the amount of water in the air doesn't change.  Since there is no phase change, at least down to the dew point, the total energy in the air decreases:  the enthalpy decreases.  However, closer examination of the line shows that the relative humidity increases.  This is because cooler air has a reduced capacity to hold water vapor.  When the temperature reaches the dew point (at 15C/59F), the relative humidity reaches 100% and water starts to condense.  It takes a LOT of energy to condense water so once that happens it suddenly takes a bigger A/C unit to get the temperature to decrease.  The other factor that comes into play is our perception of comfort when humid air is cooled.  59F is pretty chilly, so let's say we just cool the air down to 68F (20C).  Our chart indicates that the air's relative humidity now is about 75%.  This is pretty humid, so we don't feel all that comfortable -- our body's ability to cool itself is reduced because our sweat can't evaporate as readily.  At the dew point our body can't cool itself at all via sweating, so 59F would actually feel very uncomfortable.  The other downside to high humidity is that it promotes the growth of mold and mildew, steel parts rust and so on.
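Those chart readings are easy to verify with the Magnus saturation-pressure formula.  This sketch cools our 32.2C/36% air down to 20C at constant moisture content, then computes the dew point by inverting the same formula:

```python
import math

def es_hpa(t_c):
    # Magnus formula: saturation vapor pressure in hPa at temperature t_c (C)
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

t_amb, rh_amb = 32.2, 36.0
e = es_hpa(t_amb) * rh_amb / 100   # actual vapor pressure; fixed during sensible cooling

# RH after compressor-style (sensible-only) cooling down to 20C:
rh_20 = 100 * e / es_hpa(20.0)

# Dew point: the temperature where that vapor pressure would be 100% RH
g = math.log(e / 6.112)
dew_point = 243.12 * g / (17.62 - g)

print(f"RH after cooling to 20C: {rh_20:.0f}%")
print(f"dew point: {dew_point:.1f} C")
```

Both numbers agree with the chart: roughly 75% RH at 68F, and a dew point right around 15C (59F).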

To improve the comfort level, most A/C systems deliberately cool the air down past the dew point in order to force water to condense.  The cool air exiting the A/C unit has a lower relative humidity due to the condensation.  But now we have the reverse problem -- the air feels TOO cold for comfort.  How many of us have had the misfortune to be seated at a restaurant directly below an air conditioner vent?  Feels pretty cold, huh.  Well, it actually could be worse, because commercial systems actually use extra energy to deliberately WARM that cold air back up some.  More sophisticated A/C systems can recycle the heat they extract from the incoming air via a heat exchanger so the energy cost is lower:  but the cost of such an A/C system is higher.

Here's a factoid.  An A/C system that returns all the heat energy back to the interior space it's in might seem ridiculous because it doesn't cool the room -- but it DOES reduce the relative humidity.  This type of system is called a dehumidifier.

So on the one hand we have evaporative cooling systems that work well in very dry climates but become less and less effective as humidity increases.  Unfortunately, in many parts of the world high temperatures are accompanied by high humidity so they aren't nearly as prevalent as compressor-driven A/C systems.  

In contrast, compressor based A/C can dry the air too much if it's used in dry areas of the world; and in humid areas a large percentage of the energy they consume is just used to pull water out of the air.  In hot humid locations, the energy consumed by A/C can be a large percentage of the total energy consumption of a household.

Both systems have their advantages and disadvantages, so it's no surprise that there still is considerable research and development going on to mitigate the disadvantages.  I'll go over some of those efforts in future blog posts on the subject.

Tuesday, August 30, 2022

My Evaporative Cooling Test Bed, A Review

While working on my previous post regarding the usefulness of the Psychrometric Chart, I had a thought regarding the test bed I built last season.  I noticed large differences in the exit air temperature between my setup and a similar one built by Desertsun02 -- his system was outputting colder air than mine.

In retrospect, this probably is due to different cooling pads.  Some online searching revealed that certain pad materials have higher efficiency than the synthetic type I'm using.  The old-fashioned shredded Aspen pads apparently are pretty good, as well as paper ones with a honeycomb pattern.  One big difference may be that the synthetic pad I'm using isn't very thick, so the air doesn't have as long a "dwell time" in the pad compared to thicker ones.

I'd like to get the exit air temperature lower because I think I can use it to pre-cool the air flowing _into_ the evaporative cooler.  That could get me closer to performance like the Maisotsenko Cycle, which theoretically can output air that is very close to the dew point. So it looks like I need to experiment with a different pad, along with everything else.


Monday, August 29, 2022

Air Conditioning With A "Swamp Cooler": the Psychrometric Chart

 As the climate heats up and energy resources become more and more stressed, interest in alternative approaches to compressor-based A/C has increased.  A lot.  Many of the alternatives are based on evaporative cooling technology, most commonly seen in the swamp cooler.

As a kid I remember swamp coolers in two different situations.  For some time we lived in the Four Corners area, where Colorado, Utah, Arizona and New Mexico meet at one point.  The area is high desert, hot and dry in the summer; and that's perfect for the old-style swamp cooler.  The version we had in our house took hot and dry air from the outside and blew it through a water-soaked membrane.  Evaporation occurred, which cooled the air and also added some welcome humidity to the air, which usually had a very low relative humidity.  The cooled and wetter air was blown into the house using the same heater ducts that were used to heat the house in the wintertime.

The other situation was during summertime visits to relatives in Oklahoma.  At that time compressor-type A/C was expensive, so they couldn't afford it; they used swamp coolers there, too.  But in that case, while the air temperature was comparable to what we got in the Four Corners area, the humidity was much higher.  There, the swamp cooler was less effective for two reasons.  First, the high humidity reduced the amount of evaporation that could occur in the swamp cooler.  The second has to do with our perception of comfort.  In a high-humidity environment WE also are less able to cool ourselves, because our sweat is less able to evaporate.  The end result was that the swamp cooler in Oklahoma really didn't make me feel any cooler than just staying outside in the shade, hoping for a bit of wind to come by.

This is where the Psychrometric Chart comes in, to help us understand what's happening.  I'm working on an upcoming post where I (hopefully) explain how a relatively new evaporative cooling technology based on something called the "Maisotsenko Cycle" works, and how a version of it can be added relatively easily to your basic evaporative cooler to significantly improve its performance.  That explanation depends heavily on use of the Psychrometric Chart.

Anyway, back to our simple swamp cooler.  Today I measured the exterior peak temperature and ambient relative humidity and got 32.2C (just shy of 90F) and about 37% relative humidity.  I plotted that point on a copy of my Psychrometric Chart and it looks like this:

The horizontal axis is the temperature, and the relative humidity curves are the upward-trending ones as you look from left to right.  The dark point shows the conditions at our house.  Another set of curves are straight lines that go down from left to right, not quite at a 45 degree slope.  Those are lines of constant wet-bulb temperature, and give the temperature of the water in the swamp cooler membrane.  In this case we get about 70 degrees Fahrenheit.  That sounds pretty good, dropping the exterior air temperature down to 70F:  but that's the temperature that the WATER gets to.  My previous experiments with a home-brew swamp cooler show that the exit air temperature can be ten degrees higher than that, perhaps more if the air flow is excessive.  This means that the air temperature coming out of my swamp cooler might be about 80F.  Better than 90, but that wet-bulb temperature sure sounds better.
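If you'd rather compute the wet-bulb temperature than read it off the chart, Stull's (2011) empirical fit gets within a few tenths of a degree over normal outdoor conditions.  Here it is applied to that afternoon's reading:

```python
import math

def wet_bulb_c(t_c, rh_pct):
    # Stull (2011) empirical wet-bulb fit (T in C, RH in percent)
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct) - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

twb_c = wet_bulb_c(32.2, 37.0)   # the afternoon conditions measured at our house
twb_f = twb_c * 9 / 5 + 32

print(f"wet bulb: {twb_c:.1f} C ({twb_f:.0f} F)")
```

It lands right around 70F, matching the chart reading.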

Some may wonder why the air and water temperatures aren't the same.  I think that's because the air carries off the heat extracted from the evaporating water.  Also based on my experiments, excess air flow also can be a factor.

The difference between the wet-bulb temperature of 70F and the exit air temperature makes the use of a slightly-more complicated system attractive.  That's an indirect evaporative cooler, where the chilled water is piped into the interior space and passed through an air-water heat exchanger.  In this case, the water also is continuously circulated through the evaporative cooler because we want to use the chilled water to cool the house.  A level sensor detects when the water level in the chiller falls to the point where it needs to be topped-up.

In either case, once the exterior relative humidity rises above about 50% these systems become pretty ineffective as A/C.  But there is a way to get the water in the chiller colder, approaching the dew point temperature (rather than the higher wet-bulb temperature).  In the case of my example, the difference is about 6 degrees C, which would produce water at about 60F, ten degrees colder yet.  More on that in another post, which includes a more in-depth exploration of the Psychrometric Chart.

Some may wonder why the wet-bulb temperature is higher than the dew point.  I think that is because there are two effects that are in equilibrium at the wet bulb temperature.  The first is the heat extracted from water by evaporation.  The second is the heat contained in the air being transferred to the wet bulb.

Sunday, August 28, 2022

Check Valve Redux

 I had an opportunity to test my bearing-ball check valve idea, at least in terms of how it works in a pneumatic sense.  I discovered that my idea was flawed by the need to satisfy two incompatible requirements.  The first:  the ball has to seat snugly against the end of the tube to seal.  The second:  the force pulling the ball into the tube has to be small so the check valve works with a very small pressure differential.

The practical result of these requirements was that the hole in the piston where the ball goes has to be very nearly the same diameter as the ball.  But this means that the air has to flow through a pretty small constriction around the ball.  The end result was that there wasn't much of a difference between the forward and reverse flow rates.  I thought of some ways to get around this problem but they all had their own complications -- including noticeably more machining work.

So instead of that approach I went with a much simpler flap valve arrangement, made with a square piece of acrylic film.  The film is placed over a hole drilled in the base, which serves as the air inlet.  To promote free flow of the air, I also milled a shallow slot from the end of the base up to the hole.  I cut a square piece of the plastic film and then made three cuts in the form of a U, to free that portion of the plastic, enabling it to move up (away) from the hole, or down toward it to form a seal when the piston is being pushed into the cylinder.  That worked OK, but it worked even better when I made more cuts to widen the gaps between the flap and the rest of the plastic sheet.  I think the narrow slits didn't allow the flap to completely seal.

Putting a little lubricating oil in between the plastic and base would probably work even better -- for a while, but the oil could attract dust and then the seal would likely fail.  In this case a little less (of a seal) is more, in terms of longevity of the damper.  That said, I also made the damper so it can be taken apart and cleaned if that becomes necessary.

Wednesday, August 10, 2022

A Check Valve For My Soft-Close Drawer Project

 I had originally intended to use an off-the-shelf check valve in my pneumatic soft-close mechanism, but the more I played around with the overall idea, the less suitable it appeared to be.  It would have to hang off the side of the cylinder.  The piston needed to be "special" to accommodate the need to put a hole in the cylinder for the air to pass through.  And so on.

So I was thinking about integrating my own check valve into the piston itself.  The original design used the aluminum stem for the air flow (it is a tube), with a ball bearing to act as the seal.  The bearing would be pushed against the end of the tube with a small spring.  The tube, bearing and spring would be installed in the piston.

But my piston is only .75" long, and it was hard to find the right itty-bitty spring, so I decided to replace the spring with yet another magnet that would pull the bearing onto the end of the tube.  The magnet would be a ring magnet that the aluminum tube slides through, glued to the top (or outer) side of the piston.  I liked this approach because it's based on a simple physical mechanism (magnetism) and should work OK for a very long time.  The only thing that might mess it up is dust and dirt between the ball and the end of the tube; if that becomes a problem I could glue a small air filter to the bottom of the piston to keep junk out.  Maybe I'll be pro-active and just do that up front :).

But the question arose:  would there be enough force acting on the bearing ball to pull it into the end of the tube?   Or would the force be too great so the check valve wouldn't permit the damper to easily open (and therefore cause the drawer to not easily open)?

This looked like another job for FEMM (Finite Element Method Magnetics), so I could answer these questions with a simulation.  Long story short:  it looks good.  The attractive force is small, about 3.9 grams, but since the check valve will operate in the horizontal plane it won't take much force to pull the ball toward the end of the tube.  And 3.9 grams acting over an area of .049 square inches (the cross-sectional area of the tube) indicates that the check valve should open with a pressure differential of just .17 pounds per square inch.
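For anyone who wants to check that arithmetic, here it is spelled out.  The 3.9-gram pull comes from the FEMM run; the .049 square inch area is from the post, and my note that it corresponds to roughly a 1/4-inch bore is an inference, not something stated above:

```python
# Sanity-check the check-valve numbers quoted above.
GRAMS_PER_POUND = 453.592

pull_grams = 3.9          # attractive force on the ball, from the FEMM simulation
tube_area_sq_in = 0.049   # cross-sectional area of the tube (about a 1/4" bore)

pull_pounds = pull_grams / GRAMS_PER_POUND
opening_psi = pull_pounds / tube_area_sq_in

print(f"{opening_psi:.3f} psi")   # -> 0.175 psi
```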

Here's a screen shot of the simulation:


I included a steel plate that will be used to mount my damper on the back of the base unit where the drawer goes, just to make sure it wouldn't cause problems in the operation of the check valve.  And it doesn't alter the results to any great extent.  It does increase the attractive force between the magnet and baseplate, but it still is pretty low, about 14 grams.  That's less than one ounce.  At that point in the operation of the system -- damper plus long-distance magnetic latch -- the attractive force of the mag-latch is MUCH higher so it won't materially alter how the overall system behaves.


Friday, August 5, 2022

Forever Products -- Why Not?

 A significant part of the waste we all send to the landfill is products that fail due to some proprietary component, or because they weren't designed to be repairable.  Often the bad component can't be replaced because the manufacturer (A) doesn't offer it; (B) did, but only for a short while; or (C) never designed the product to permit replacing the failed part at all.

All these issues are things that can be addressed in a variety of ways.  While the "right to repair" movement has gained some traction, my examples in the previous paragraph show that it can only go so far -- unless the design process used to make our "stuff" includes the requirement that the item can be repaired for a very long time, even long after the original manufacturer has gone out of business.

Our military has similar requirements, considering how large its inventory of materiel can be and its concerns about the availability of replacements in wartime.  For the civilian world, the combined issues of waste reduction and resource depletion -- both mineral and energy resources -- multiplied across an entire consumer economy, have equally large implications, given current trends.

So, what's in the way of making products that use off-the-shelf components as much as possible, so they can be replaced long after the manufacturer has declared the product obsolete?  What's in the way of requiring manufacturers to provide design data on their proprietary components for those same products so they can be made with 3D printers?  Many companies these days employ designed-in obsolescence as a part of their business model, so they can sell new stuff.  But that tactic has become a larger and larger problem, given the issues of waste and all the resource-consuming aspects of making new items that, in many cases, aren't any better (often less) than what they replaced.

This is where government has a role to play, basically drawing a line and saying that youse-guys have to clean up your act.  Of course, manufacturers should be able to offer new products with new features, but they need to design their products so they can be repaired; and if some unique parts are in there, once they have come out with a new model they must make the design information for the older item available, so replacement parts can be fabricated by a third party or, if possible, with a 3D printer.

The overall impact of this would be multifold.  For starters, manufacturers that simply "churn" their products so older, but equivalent, products become obsolete, will have a greatly reduced incentive to do so.  To appeal to consumers, new products would only succeed if they offered better functionality, or added functions.  This would promote innovation rather than just putting a different color of lipstick on the same pig.  Of course, the new products would have to be Forever Products too, so the pattern of innovation would continue.  Or maybe some vendors would just offer Forever Products and tap into the demand for something that can be repaired until long into the foreseeable future.

Some manufacturers are very good about providing replacement components for the products they sell, but they seem to be in the minority.  That has to change.

Saturday, July 16, 2022

The long-range magnetic latch: simulation oddities.

 While doing additional simulations I got really strange results if the magnet was more than about 1 inch long.  The initial force between the magnet and iron pole piece came out as a negative number!  It didn't make any sense.  Then I started thinking about the simulation "universe".  Part of the initialization needed for the simulation is to define the boundary of the problem -- basically, how large the space around the magnet and pole piece is.  I'd been specifying a relatively small boundary distance, because increasing it also increases the simulation time.  Well, it turns out that was a bad idea for some cases.  When I increased the size of my simulation universe, the force numbers all made sense again -- they were all positive, and the mysterious repelling force was gone.

Even simulations with shorter magnets that did not exhibit negative force came out slightly different when the boundary distance was increased.

Moral of the story:  even though it can cause simulations to take longer than you like, it's important to make the problem boundary large enough.  Start with a smaller size, then increase it to see what happens to the simulation results.  Look at the results you get with a critical eye to see if they make sense or not.
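That "grow the boundary until the answer stops moving" discipline can even be automated.  Here's a minimal sketch of the idea -- `solve` is a hypothetical stand-in for one complete simulation run at a given boundary radius, not real FEMM code:

```python
# Keep doubling the problem-boundary radius until the simulation result
# stops changing (to within a relative tolerance).
def converge_boundary(solve, start_radius, tol=0.01, max_doublings=8):
    radius = start_radius
    previous = solve(radius)
    for _ in range(max_doublings):
        radius *= 2
        current = solve(radius)
        if abs(current - previous) <= tol * abs(previous):
            return radius, current   # boundary is big enough
        previous = current
    raise RuntimeError("result never settled -- check the model")

# Toy stand-in: the computed "force" creeps toward 3.9 as the boundary grows.
toy_solve = lambda r: 3.9 * (1 - 1 / (r * r))
radius, force = converge_boundary(toy_solve, start_radius=2.0)
print(f"settled at radius {radius:g}, force {force:.2f}")
```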

Tuesday, July 12, 2022

More About 2 to the millionth power: How Many Digits Does it Have?

Any old scientific calculator can tell you that log(2) is approximately equal to .301030.  The log of 2 taken to the power of one million (2^1000000) is log(2)*1,000,000, or about 301,029.996.  It's not trivial to use this number to determine 2^1,000,000 down to the last digit, but it IS trivial to say how _many_ digits the number has:  add 1 and throw away the non-integer part (we add 1 because 10^0 equals 1 -- a number whose log is between 0 and 1 still has one digit).  That gives 301,030 digits.  A caution:  if you use the rounded value .301030, the product comes out as exactly 301030, and adding 1 then (wrongly) suggests 301,031 digits.  The rounding pushed the product just past an integer boundary, so at exponents this large you need a few more digits of log(2).

For a simple verification of my assertion, let's take a look at 2^10, which is easy to compute:  it's 1024.  10 times .301030 is 3.01030.  Adding 1 to this is 4.01030.  Taking the integer value we get 4, and that's how many digits 1024 has.

In fact, we can use this scheme for any power of 2.  For example, we know that 2^16 = 65536.  16 times .301030 is 4.816; adding 1 gives 5.816, and its integer part, 5, is exactly the number of digits in 65536.  The limitation of the approach is the accuracy of the value we use for log(2), which is a transcendental number, so its decimal expansion never terminates or repeats.  But there are some online calculators we can use to get quite a few more digits.  One reports that log10(2) = 0.30102999566398114, which should suffice for determining how many digits 2^N has for N well into the trillions.  According to an earlier post of mine we already know that its least-significant digit is a 6 😀.
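Python's arbitrary-precision integers make it easy to verify the rule exactly -- even for 2^1,000,000 -- by counting the digits directly:

```python
# Verify the digit-count rule: digits(2^N) = floor(N * log10(2)) + 1.
import math
import sys

# Python 3.11+ limits str() on huge ints by default; lift the limit.
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(400_000)

LOG10_2 = 0.30102999566398114   # the value quoted above

def digits_via_log(n):
    return math.floor(n * LOG10_2) + 1

for n in (10, 16, 1_000_000):
    exact = len(str(2 ** n))            # count the digits directly
    assert exact == digits_via_log(n), (n, exact)

print(digits_via_log(1_000_000))   # -> 301030
```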

By the way, we know that log(2) MUST be an irrational number (it is, in fact, transcendental, though proving that takes heavier machinery):  if it equaled some fraction p/q, then 2^q would equal 10^p, which is impossible because the right-hand side is divisible by 5 and the left-hand side isn't.  That's why the digit-count relationship works for any arbitrarily-large power N of 2^N.  It doesn't matter if N = 10 or 10^10^10.... 

Monday, July 11, 2022

Magnetic Latch Simulations, Part Two

 I changed my simulation scripts for two variations on a magnetic latch mechanism so I could plot force vs. distance for them.  One is the simple magnet and flat steel plate, and the other is one of my long-range latch designs.  The differences are pretty stark:



The long-range latch force vs. distance plot looks a little bumpy but the important thing to note is that it is exerting pretty significant pull as far away as 1.6 inches, while the simple latch just starts to do its thing at around one-half inch, and even at that distance is far less "strong".

Based on this result I think a combination of my long-range latch and a simple air pneumatic damper to get the soft-close effect should work pretty good.  And there's hardly anything that can wear out.  The most likely failure probably would be the one-way valve, and if that's the case I could easily make one using a bearing ball. Putting a tiny magnet in there to retain a steel ball would enable it to work in any position.  Of course, I'll have to use FEMM to simulate that 😃.






Tuesday, July 5, 2022

A deep dive: A broken soft-close drawer mechanism and finite-element analysis

 This post really highlights nurd-dom.

For some background, about 11 years ago we built our house.  We included soft-close cabinets for all the drawers -- the kitchen, all the bathrooms and the utility room.  At the time, the cabinet vendor we chose was including soft-close drawers at no additional cost, so of course we got them.

Ten years in, one of the two soft-close mechanisms in our most-often used kitchen drawer failed.  It was the silverware drawer.  The nifty latch mechanism that catches and releases the drawer at the right point broke.  It was just plastic and apparently not really up to the job.  That failure turned out to be pretty minor since the remaining soft-close device was still doing a good job.  However, about a week ago it also failed.  The latch thingie didn't break but it wasn't holding, probably due to wear.  It made a very annoying "sproing" sound every time the drawer was opened, when the latch let go.

Being a DIY kind of person I removed the drawer, examined the mechanism, and figured out that it couldn't be repaired, so I just took the broken soft-close mechanism off the drawer slider.  Because the drawer no longer stayed closed I made a simple magnetic latch using a counter-sunk ring magnet screwed to the back of the drawer and a wood block topped by a piece of steel, screwed to the back of the base unit.  It works to keep the drawer closed, but it's really easy to close the drawer too hard so it bangs into the base unit, and in some cases it has bounced back out.  I looked at some off-the-shelf soft-close replacements, but the only ones compatible with the rest of our drawer hardware were exactly the same design as the one that had failed.  I didn't have a good feeling about that, so I decided to look elsewhere.

I started thinking about some kind of damper to slow that final approach so the drawer behaves more nicely when it's closed.  One of my goals was to make something much more reliable than the original version, so I looked at a type of pneumatic damper to complement the (presumably pretty reliable) magnetic latch.

The basic idea I came up with was to make a piston and cylinder with a one-way valve, so the piston could be easily withdrawn but the valve would close when the piston was being pushed back into the cylinder.  Air leakage around the piston would be slow enough to produce some back-pressure and slow the drawer at the end of its travel.  A rod attached to the piston would extend out so the back of the drawer would push against it and the piston.  To pull the piston out, the end of the rod would have a small magnet.  The magnet would be attracted to another steel plate, this time mounted on the back of the drawer.  Sounds a little complicated, but the idea was to use simple physical phenomena rather than a complicated and fragile mechanical latch mechanism to do the job.

I was pretty sure the damper would work, but the problem then came back around to the magnetic latch.  When the drawer pushes against the piston mechanism the force will initially be fairly high, so the latch needed to be able to exert enough force over about an inch's worth of distance to slowly pull the drawer in against the damper's resistance.  The force between a magnet and a flat plate, my basic magnetic latch design, has an extremely nonlinear relationship with regard to the distance between them.  It starts out very low and stays that way until the magnet is very close to the plate.  I wanted to extend the attractive force, to make my system work better -- or, perhaps, to enable it to work at all.

The thought I had was to make a different kind of steel piece to attract the magnet:  an iron cylinder that the magnet would travel inside, all the way to the bottom.  The iron would start out fairly close to the magnet, so the initial pull-in force would be significant; and to ensure that the magnet keeps traveling inward, the cylinder's interior would be machined with a V-shaped profile that becomes smaller as the magnet goes deeper.  This way the magnetic force would continue to pull the magnet into the cylinder.

That geometry looked to be pretty difficult to get right -- it could take a lot of experiments to figure out what would and wouldn't work well.  So I turned to software, in the form of a magnetic-field simulator called FEMM.  It solves magnetic field problems using a technique called finite element analysis, and one of its features is that it can calculate the force between a magnet and an arbitrarily-shaped iron pole piece.  Perfect....except that I wanted to easily change things like the angle of the V profile, the dimensions of the magnet and other features that I thought might make it all work better.  For simple problems you can create shapes using the mouse, but that isn't very practical for creating precisely-dimensioned features.  Fortunately, FEMM also can be driven by a BASIC-like scripting language called Lua, enabling me to create all the geometry with a program, then run the simulation and display the results -- so I could change the physical design in a text editor and quickly evaluate the result.  For an example of the program's output, I offer this screen shot:


The intensity of the magnetic field is depicted by colors, where teal is very low and red is high.  In this model, the latch is basically shown from a top view, where the back of the latch is at the top of the screen.  The back of the iron pole structure has a hole in it to reduce the final holding force of the latch.  My simulations showed this worked to get the pull-in force to be comparable to the hold force, which in this case is the force needed to separate the magnet from the back of the pole piece.  It is NOT the same as the "lift capacity" of the magnet as specified by vendors like K & J Magnetics, due to the presence of the hole.

This is a theoretical study -- clearly, a structure with just the magnet and pole piece would be unstable because it would only take a minute offset one direction or another to cause the magnet to snap over to one side or the other of the pole piece, messing up my nice simulation work.  To prevent that, the inside of the pole piece actually will have a plastic insert (cast or machined) to fit closely between the pole piece and magnet.  That will keep the magnet centered so my simulations should be reasonable approximations to what actually happens.  I hope.....

I'm thinking that it may be possible to integrate the damper and long-range latch into one unit, but initial testing will be done using two separate parts to evaluate their separate functions.  One real concern is how to install the pieces correctly, to ensure proper operation.  When the drawer is installed it's almost impossible to see how everything lines up so that problem will need to be addressed.

More later :).


Monday, May 30, 2022

Is 2 taken to the power of one million minus 1 a prime number? NO! and I didn't have to calculate it to find out......I used some LSD

 Mersenne primes are prime numbers of the form 2^N - 1.  Relatively few primes have this form, and of course not all numbers produced by that formula are prime.  For a simple example, 2^4 = 16.  16 - 1 = 15, which is divisible by 3 and 5 -- so it's not a prime number.

You will have to read on to learn about the LSD.

Prime numbers are important when it comes to generating highly secure encryption codes, so they have been of interest for a long while.

For some reason, perhaps yet another sleepless night, I started thinking about powers of two, in terms of their digits.  More specifically, whether the least-significant digit has any kind of pattern to it.  Some simple mental arithmetic revealed the answer, and it should become obvious when I write down the first few powers of 2, starting with N = 1:   2, 4, 8, 16, 32, 64, 128, 256....and so on.  Looking at the Least-Significant Digit (the LSD, gotcha!!!) of this series we see:  2 4 8 6 2 4 8 6 .... so we have a sequence of 4 digits that endlessly repeats:  2 4 8 6 ....

A few more mental gyrations and I came up with a way to predict what the last digit of any power of 2 is.  It does take a little more math, requiring the use of the Residue function.  Residues are calculated by taking the remainder of long division.  It's easier to show by example:  take a look at 10/4.  Long division gives us a quotient of 2, because 4*2 is the largest multiple of 4 that doesn't exceed 10.  10 - (4*2) gives us the remainder, 2.  That is what the Residue function produces -- the remainder.  So if we examine the remainder of (any whole number)/4, it can only be zero, one, two or three.

Now let's create an array with the values [6,2,4,8] in it.  The ordering may look a little different than you'd expect, because the index into the array is the residue of N/4, which can only be 0, 1, 2 or 3 -- and a residue of zero (N = 4, 8, 12...) always corresponds to an LSD of 6.

Now let's determine what the LSD of 2 taken to the one-millionth power must be.  Some simple math says that the remainder of 1,000,000/4 is 0 (as it is for 100 and every higher power of 10).  The entry at index 0 in the array is a 6, so we know that the LSD of 2 to the millionth power is a 6.

Recall that Mersenne primes have the form 2^n - 1.  If we subtract 1 from 6, we get 5, and all numbers ending in 5 are divisible by 5.  Therefore, it is NOT a prime number.

For the same reason, we also can say that any number calculated by evaluating 2^(10^n) - 1 is NOT prime, as long as n is greater than 1.
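The whole LSD argument fits in a few lines of Python, including a spot-check of the repeating 2-4-8-6 cycle against directly computed powers:

```python
# Least-significant digit of 2^N, via the repeating cycle (valid for N >= 1).
LSD_OF_POW2 = [6, 2, 4, 8]   # indexed by N mod 4

def lsd_of_2_to_the(n):
    return LSD_OF_POW2[n % 4]

# Spot-check the cycle against directly computed powers of 2:
for n in range(1, 40):
    assert (2 ** n) % 10 == lsd_of_2_to_the(n)

# 2^1,000,000 ends in 6, so 2^1,000,000 - 1 ends in 5 and is divisible
# by 5 -- hence not prime.
print(lsd_of_2_to_the(1_000_000))   # -> 6
```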

It so happens that we can use a similar but slightly more complicated scheme to determine what the next-most-least-significant digit (NMLSD) of a power of 2 is.  I won't go into much of it here except to say that the sequence has a length of 20.  The NMLSD of 2^1,000,000 is a seven.

After that the sequences become ever-longer so the approach becomes less and less viable.  If nothing else, it becomes necessary to accurately calculate some pretty large numbers, just to examine their smallest parts.
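The length-20 sequence for the last two digits can be checked the same way, using modular arithmetic so the huge numbers never have to be computed in full:

```python
# Last two digits of 2^N repeat with period 20, for N >= 2.
PERIOD = 20
for n in range(2, 200):
    assert pow(2, n, 100) == pow(2, n + PERIOD, 100)

# 1,000,000 mod 20 is 0, so 2^1,000,000 shares its last two digits
# with 2^20 (the smallest exponent >= 2 with that residue).
last_two = pow(2, 20, 100)
print(last_two)   # -> 76, so the NMLSD of 2^1,000,000 is indeed 7
```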

Friday, May 13, 2022

Touch Sensor Update

In some ways it's been a rough year, what with Covid and a rental-rehab project we found ourselves saddled with.  Supply-chain and contractor issues caused problems; and in many cases we found that the most expedient way to move forward with the project was to do the work ourselves.  We did a lot of research before embarking on any of the major projects we had to do.  Anyway, all that delayed work on my touch sensor -- but now that we've got the rental fixed up (and rented), I've had time to work on some long-delayed personal projects.  That includes the touch sensor that I designed a PCB for.  I did have time to order the PCB and assemble one, but that's about as far as it got until recently.  I finally was able to hook up my 4-point sensor connectors and test the thing:  and, what a surprise -- it worked, right off the bat.  

The wires connecting the modified battery-charger clips to the circuits are a mess, since I used individual wires but I have some cable management stuff I can wrap around them to make it all a little less like an octopus waiting to snare me when I pass by.

Now I'm working on a really sad antique dresser we bought a few years back.  When we bought it we didn't realize what bad shape it was in, so it needs some work -- to put it mildly.  I think a child may have used some of the drawers as a ladder and stepped through the bottoms.  The dovetails on the lowermost drawers were loose, and the rabbets on several of the side pieces (the ones that hold the bottom piece in place) were split or just plain broken off.  I also had to reinforce the sides for a couple of them.  It has water damage, too -- the oak veneer on one side of the case has delaminated.  I'm not going to try to repair that for now -- it basically was purchased to put in a guest room so visitors should just appreciate having a dresser, however it looks (as long as it is usable, anyway).  The feet are a mess, too -- three have the remnants of some sort of steel foot, and there's nothing at all on one of them.  The steel will be pretty bad for scratching our wood floors so there's some work to do there before the dresser is put into service.  I learned a lot during our rental rehab w/regard to doing stuff like trim work so that will come in handy for this project.

Friday, May 6, 2022

Oxygen, the master vampire element

 As a preface to this entry, I'm going to write about my first real experience with what I now call the master vampire element, oxygen.  At the time, I was working on a different approach to etching gold.  Since gold is a relatively inert element, it takes some doing to etch it -- basically, turning the metal into a salt of some kind.  I was thinking about gold chloride.  Aqua Regia is a commonly-used etchant for gold, made by mixing nitric acid and hydrochloric acid.  Thing is, the mixture is unstable because the two acids react to form something called Nitrosyl Chloride -- and it quickly decomposes.  It also takes some time for NOCl (its chemical formula) to form so you're running a race between getting the etchant working and then using it before it decomposes.  There also are a number of different ratios given for the ingredients, probably because they come in a number of different concentrations.  So I had some interest in coming up with something that was more stable and more reproducible.  I had concentrated hydrochloric acid available, the same with 30% hydrogen peroxide, so I had the thought of combining the two to see how that would work.  The idea was that the peroxide would oxidize the gold and then the acid would react with the oxide to form its chloride.

Well, my new etchant sort of worked but it turned out to be even more unstable than aqua regia.  The REALLY interesting part was that my mixture quickly decomposed by releasing a green-yellow gas:  chlorine.  Well now, what was that about?  It didn't take long for me to realize that the hydrogen peroxide had done it, using its extra oxygen atom to grab two hydrogen atoms from two molecules of hydrochloric acid (HCl), forming one molecule of water and one molecule of Cl2.  Up to that point, I had thought that chlorine was a pretty strong oxidizer and was pretty safe from being affected by oxygen:  but my little experiment blew that notion right out of the water.  BTW I performed my experiment with just a small quantity of the two materials, under a fume hood so no harm done.

Now I want to talk a little about the idea of "valence".  Fundamentally, it means how many electrons an element in a compound has either gained or lost:  or wants to gain or lose.  Many reactions are all about electrons.  For instance, in the water molecule we have two atoms of hydrogen and one atom of oxygen.  Oxygen has a valence of 2, because it "wants" two additional electrons to fill its outer shell (and each hydrogen atom only has one to provide, so it takes two to form a stable molecule).  And oxygen REALLY wants those electrons, as shown by my little experiment.

It gets even more interesting though.  Looking at chlorine (Cl), it has a valence of 1 when it combines with things like sodium to form sodium chloride, table salt.  In that case chlorine is the oxidizer and sodium is the reducing agent.  But oxygen is such a powerful oxidizer that it can actually wrest electrons _away_ from chlorine, which in itself is no slouch as an oxidizer.  In fact, oxygen is so powerful that it can abduct SEVEN electrons from chlorine, forming perchlorate compounds.  They are used to make explosives in fireworks.  Perchlorate compounds themselves are extremely powerful oxidizing agents, so if mixed with things like charcoal and sulfur they are more than ready to go boom.  Perchlorates are not the only ones infected by the bite of oxygen.  Chromium trioxide (CrO3) is notable because its chromium is in a +6 oxidation state (3 * oxygen's valence-of-2 = 6).  Squirting acetone on a pile of dry chromium trioxide powder will instantly cause the acetone to burst into flame, because it's just ripped apart by the combination of hexavalent chromium and oxygen.  Another good one:  the permanganate ion.  In that one, manganese is in a +7 oxidation state.  By now it shouldn't be much of a surprise that it also is an extremely powerful oxidizer.  It will react with a sugar solution at room temperature and turn it into black sludge in very short order.  When bitten by oxygen, nitrogen suffers a similar fate, and as a result becomes usable for things like explosives (think nitroglycerine) and rocket fuel.
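The oxidation-state bookkeeping in the paragraph above boils down to one line of arithmetic.  A minimal sketch, assuming oxygen always counts as -2 (true for every compound mentioned here):

```python
# Oxidation state of the central atom, with each oxygen counted as -2:
# central + (-2 * oxygens) = overall charge  =>  solve for central.
def central_oxidation_state(oxygen_count, overall_charge=0):
    return overall_charge + 2 * oxygen_count

print(central_oxidation_state(3))       # CrO3 (neutral)      -> +6
print(central_oxidation_state(4, -1))   # ClO4^- and MnO4^-   -> +7
```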

In these instances, the base elements -- chlorine, chromium and manganese -- range from being a fairly powerful oxidizer to "not in my wheelhouse" -- but oxygen bites 'em and they turn into vampires themselves.  That's why I call oxygen the master vampire, because it can affect otherwise innocent elements and turn them into monsters, too.


Sunday, February 13, 2022

Touch sensor for lathe and mill setup

 Some time ago I learned about a machining web site created by Rick Sparber:  right here.  He has a number of interesting articles about DIY machine accessories and improvements, but one in particular caught my attention -- a simple touch sensor that can be used to set up a metal lathe or mill for machining metal.  

The design can detect a small change in already-low resistance.  The basic idea is to measure the resistance between the cutting tool and workpiece being machined.  When the cutting tool is NOT in contact with the work, the current path is through the machine -- the spindle bearings being the major source of resistance compared to the body of the machine.  When the tool comes in contact with the work, that is a lower-resistance path -- and that is the basis of the touch detector.

 It looked pretty good, but being an electronics kind of fellow, I thought there might be some room for improvement.  The idea was to change the design to allow 4-terminal, or Kelvin, sensing.  Rick's design uses the same wires to force current through the lathe/mill AND sense the voltage change when the tool touches the workpiece.  This means the design is sensitive to contact resistance at the tool and workholder ends.  The 4-terminal approach avoids this problem by separating the force and sense connections.  A good article regarding Kelvin sensing can be found here.  I hasten to add that Rick has some more-refined designs that DO implement 4-terminal sensing, and interested readers should look into them, particularly if they want a good milliohmmeter or a CNC-compatible touch sensor.  But for various reasons I think my design is a worthy alternative to his simplest design, and I offer it here.
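To put some numbers on why separating the force and sense connections matters, here's a toy model.  The resistance and current values below are purely illustrative assumptions, not measurements from my machine or from Rick's design:

```python
# Toy comparison of 2-wire vs 4-wire (Kelvin) resistance sensing.
r_machine = 0.005   # ohms: the spindle/tool path we actually want to sense
r_contact = 0.050   # ohms: contact resistance at each clip (assumed)
i_force = 0.100     # amps of force current

# 2-wire: both clip contact resistances sit inside the measurement loop.
v_two_wire = i_force * (r_machine + 2 * r_contact)
# 4-wire: the sense lines carry (essentially) no current, so only the
# machine path appears between the sense points.
v_four_wire = i_force * r_machine

print(f"2-wire reads {v_two_wire / i_force:.3f} ohm")    # swamped by contacts
print(f"4-wire reads {v_four_wire / i_force:.3f} ohm")   # the real value
```

The point of the exercise: in the 2-wire hookup the clip contacts completely swamp the milliohm-level path being measured, while the 4-wire hookup reads the machine path alone.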

My design schematic:


The design is quite similar to Rick's, including the automatic power-up scheme implemented by Q1.  The major difference is how the inputs to U1-1 and U1-2 are connected:  they are routed to separate sense lines (although my design does permit simpler 2-wire use if that works OK on a particular machine).

Rick's original design used a couple of spade connectors and super-magnets to attach the touch sensor to the cutting tool and workholder, but that's not compatible with the 4-terminal approach.  One of his other touch sensor designs uses two miniature battery-charger clips with the sense connections brought in via an insulated contact, and that's what I'm using with my design.  I drilled a hole in the jaw of each clip large enough to accommodate a 1/4" nylon screw, then chucked the screw in my lathe and drilled and tapped a #4-40 hole down its center.  A #4-40 brass screw was threaded into the nylon screw, then the nylon screw was attached to the jaw with a nylon nut.  The outside end of the brass screw has a nut to attach a spade connector for the sense line.  If it's not clear from my explanation, the head of the brass screw forms the Sense contact.  Experimenting with different-sized end mills suggested that it would work better with a washer underneath the brass screw head, so I added that too.

I haven't had a chance to debug the design yet, but once that's done my plan is to make it open source.  More on that later....