Historically, the only purpose of modeling was to size heating and cooling systems, but it is now used to trade off the amount of insulation, window efficiency and air tightness against HVAC and solar array sizes. Modeling also lets you compare against a standard such as LEED, PassiveHouse or standard construction via a HERS rating, if you are interested in such comparisons, as well as determine how much PV you will need if you want a zero-energy house. Modeling will also tell you how much passive solar gain you will get, and whether you are getting too much in summer. You can also get an estimate of your annual energy use. Modeling energy use is theoretically simple but practically complex, particularly over longer time periods. Occupant behavior, such as thermostat settings, window opening and equipment use, has a very large effect on a building's energy use. Likewise, energy use literally changes with the weather. On top of that, add radiant heat gain (see the discussion in the R-values section), and the fact that real-world assembly R-values vary somewhat from the theoretical ones (see the next section). There are numerous software packages that calculate heat loss (including some free ones), some of which can do very sophisticated models in an attempt to deal with the practical complications. Different programs have different capabilities, so which is best depends on what you want. Even if you buy a package, keep in mind that annual energy use is so dependent on occupant behavior and weather that no model can possibly predict the final result accurately. (1) The more sophisticated packages take more effects into account and will therefore produce a better estimate, but you can still get useful estimates with just a simple model done in a spreadsheet. If the concern is only ballpark annual energy use, or only worst-case energy use (for example, to size a backup heating system), or typical-case energy use, a simple spreadsheet will give a good result. The rest of this document describes how to do that and explains the limitations of the approach.

Heat loss: conduction and infiltration

There are two main mechanisms of heat loss in a building: conduction through the building envelope (that is, the exterior surface: floor, walls, roof, windows, etc.) and air infiltration (or rather, the warm air that escapes the building being replaced by cold air from outside). Other factors, such as radiant loss, really only affect the inside-to-outside temperature difference. These factors can be quite significant over short periods of time and can significantly affect the annual total, but they are ignored here. (2)

Heat loss through the envelope

The general heat loss formula is: Q = U × A × ΔT. In plain words, the heat loss through an area of size A is determined by the U-value of the materials and the temperature difference between inside and outside (that is, the temperature difference of the two surfaces, not of the two air temperatures, which may not be the same; below is an adjustment for air temperatures). To get the heat loss of an entire building, you divide the building into areas that have the same U-value, and then add them all up to get the total heat loss.
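A minimal Python sketch of that bookkeeping (the areas and U-values below are illustrative placeholders, not the example house that appears later in this document):

    # Conduction loss Q = U * A * dT, summed over envelope areas.
    # Areas (ft^2) and U-values (Btu/hr-ft^2-F) are made-up placeholders.
    areas = {
        "walls":   {"A": 1040.0, "U": 1 / 32.5},   # e.g. an R32.5 wall
        "windows": {"A": 120.0,  "U": 0.30},
        "roof":    {"A": 1000.0, "U": 1 / 60.0},
        "floor":   {"A": 1000.0, "U": 1 / 48.0},
    }

    def conduction_loss(areas, dT):
        """Hourly heat loss in Btu/hr for an inside-outside difference dT (F)."""
        return sum(part["U"] * part["A"] * dT for part in areas.values())

    print(conduction_loss(areas, dT=70 - 40))  # a typical heating-season hour
    print(conduction_loss(areas, dT=70 - 20))  # a cold night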
So typically you will end up with four different areas: walls, windows & doors, roof, and floor. If one of these areas has parts with a different U-value (for example, a wall section that is built differently), you will end up splitting that off into its own category as well.

Heat loss through an assembly: because walls, roofs, etc. are assemblies of different materials, calculating the heat loss through such an assembly requires combining the R-values of the various materials to compute an effective R-value for the assembly. First, divide the assembly into sections that are uniform from inside to outside; for example, in a 2x4 wall there is the part where insulation fills the cavity and the part where there is a 2x4 and no insulation. Second, calculate the R-value of each section by adding the R-values of each of its layers. For example, a typical 2x4 wall would be: R0.5 (wood siding) + R0.5 (1/2" wood sheathing) + R11 (insulation) + R0.5 (sheetrock) = R12.5. The R-value of a material is found either in a table for the whole material (for example, an R11 fiberglass batt with a thickness of 3.5") or by using the R-value per inch of the material (for example, R3.1/inch) multiplied by the actual thickness (R3.1/inch × 3.5 inches = R11). Third, calculate the U-value of the assembly as the sum of the weighted U-values of each section. To do this, you will first need to calculate the percentage of the total area that each of the different sections occupies. The R-value of the assembly is then just the inverse of its U-value.

Here is an example. The sample wall section at right consists of two different cross sections: (A) where there are no 2x4 studs, which is sheetrock-insulation-plywood sheathing, and (B) the section where there are studs, which is sheetrock-2x4-insulation-2x4-plywood sheathing. In this example the 2x4s are 24" apart, which means that each 24" section of wall consists of 22.5" of assembly A and 1.5" of assembly B. The R-value for section A is: 0.6 (sheetrock, R1 per inch) + 33.3 (cellulose, R3.7 per inch) + 0.5 (plywood, R1 per inch) + 0.5 (siding: estimate) = R34.9. The R-value for section B is: 0.6 (sheetrock) + 3.5 (2x4) + 7.4 (cellulose) + 3.5 (2x4) + 0.5 (plywood) + 0.5 (siding) = R16. To get the R-value of the whole wall, we sum the U-values of each section multiplied by the percentage of the overall assembly they represent, and then take the inverse. For our sample wall, section A is 94% (that is, 22.5" of 24") and section B is therefore 6%. The basic formula is: U = sum over sections of (U_x × P_x), where U_x is the U-value of a section and P_x is that section's percentage of the whole assembly. For our wall, U_wall = (1/34.9) × 0.94 + (1/16) × 0.06 = 0.0307, which is an R-value of about 32.5. To get an R-value for a wall (or any assembly) you must first add up the U-values and then take the inverse: averaging R-values by percentage of area will not give a correct result. In a real wall there are significant differences from this simple wall section: for example, there are usually double or triple studs at corners, there are top and bottom plates, headers of various sizes over windows, fire blocking, electrical outlets, plumbing, vents, etc.
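Because the most common mistake is averaging R-values by area instead of U-values, here is a small Python sketch of the correct order of operations, using the sample wall's numbers (it also reproduces the 12% and 20% framing-factor variants discussed next):

    # Effective R-value of an assembly: weight U-values (not R-values) by area.
    def assembly_r(sections):
        """sections: list of (R_value, area_fraction); fractions sum to 1."""
        u = sum(frac / r for r, frac in sections)
        return 1 / u

    # Sample wall from the text: section A (R34.9, 94%), section B (R16, 6%).
    print(assembly_r([(34.9, 0.94), (16.0, 0.06)]))  # about R32.5
    # Standard framing factors, as discussed next:
    print(assembly_r([(34.9, 0.88), (16.0, 0.12)]))  # about R30.6
    print(assembly_r([(34.9, 0.80), (16.0, 0.20)]))  # about R28.2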
An accurate value would require breaking the wall down into every different component section, while a good ballpark is to use standard framing factors (that is, the percentage of the wall that is solid framing). Standard framing factors are much larger than in the sample wall section shown above, ranging from 12% to 20%, which is two to three times the sample above. Although this reduces the R-value of the whole wall, the effect is not terrible: doubling the framing to 12% results in (1/34.9) × 0.88 + (1/16) × 0.12, or R30.6; increasing the framing to 20% results in (1/34.9) × 0.8 + (1/16) × 0.2, or R28.2.

Real-world problems. There are two complexities here: (1) the conditions the wall operates under are different from the assumptions behind this equation (mostly due to radiant temperature being different from air temperature; see R-values for the full discussion), and (2) the insulation material itself may have degraded or been installed incorrectly. Both can be large factors, and less-than-perfect installation is probably more the norm than a rarity. Batt insulation is notoriously difficult to install so that it fills the cavity evenly. Blown-in loose fill is easier, but still not trivial, and if it is not installed densely enough it will settle and leave voids at the top of the walls. In that case, the final R-value depends on the fill having been installed at its specified density. With loose fill, the R-value increases with increasing density up to a point, but decreases after that. In general it is quite hard to get past that point; it requires compressing the insulation. Excluded from this calculation is the effect of the air layers on the inside and outside of the assembly (for a more detailed explanation, see "Air Layers"). Typical values are R0.7 for the inside layer of a wall and R0.2 for the outside. These are averages, with built-in assumptions about the conditions the wall experiences. Note that the values for ceilings are lower. Because they are relatively small compared to super-insulated assemblies, and because real-world assemblies tend to have more structural material, more pipes, more wires and less-than-perfect insulation installs, the air-layer R-value is ignored in all the Sensible House documents.

Heat loss through a slab or basement wall

Calculating heat loss through a slab involves two significant difficulties: the ground has a high specific heat, so heat both moves and is stored as it moves; and second, the ground temperature changes both with the season and with depth below the surface (see the diagram at right). The farther from the surface, the smaller the temperature swing, until you reach a point (about 30') where there is no longer any change. This temperature is approximately equal to the average annual outdoor temperature. In winter the surface is colder than the deep earth, and in summer it is warmer. In spring the surface warms faster than the deeper ground, so as you go down, the ground first gets colder and then warmer again. In fall this process is reversed. Complicating the problem further, soils have different R-values depending on their composition (sand, clay, rock), varying between about R0.5/ft and R10/ft. If the soil has significant moisture in it, the R-value will be lower, often much lower.
Complications also include solar heat gain (which makes the ground warmer than the air) and evaporation (which makes it colder).

Modeling slab heat loss

Unlike above-grade heat loss, there appears to be no simple way to model heat loss through the ground, and many fairly complex models have been proposed and incorporated into energy modeling software. The diagrams below illustrate the issue, showing two possible arrangements of heat distribution in the vicinity of a slab. The left diagram represents an idealized steady-state heat distribution during winter, assuming a uniform R-value throughout the soil (note that in reality the temperature gradient is continuously variable; it is just easier to draw as incremental steps). In this case, the midpoint of the soil becomes the temperature midpoint between inside and outside. The black arrows indicate the direction of heat flow, which is the shortest path from warm to cold. The right diagram shows a different, but still possible, heat distribution. This distribution could arise because the soil's R-value is not uniform, or because the weather makes a long-term change from warm to cold. Since the heat distribution is different, the heat flow is also different: longer paths mean slower heat loss, since the R-value of the longer path is greater. Note that in these simplified diagrams, the deep ground temperature is assumed to be 50°F. (6) Both diagrams show a winter day; in other seasons the heat distribution away from the building will be different, which will also affect the heat loss path length. In spring, the surface warms before the ground below it, leaving a slice of cold ground, so the gradient goes from warm to cold down to the midpoint, then back to warm as you approach the slab. The wedge of cold earth warms both from above and from the building's losses. In fall the process reverses. Note that in summer and in hot climates the process reverses and heat is gained through the slab rather than lost; the mechanism is the same. The temperature at depth has an increasing time lag. About a foot down, the temperature changes very little daily, but will change over a week. A few feet down, the time lag is more like a month; ten feet down it is a few months; and at about thirty feet it is around a year. As you go down into the soil, the time lag does not change linearly, but exponentially more slowly. Essentially, the ground below the slab becomes an extension of the slab, so downward heat flow just increases the size of the heat bubble, and the heat loss is due to lateral movement. If there were no lateral movement, the heat bubble would continue to extend downward until it reached the limit of heat coming up from the earth's core (the earth warms about 5° for every 300'). What this means is that the heat loss from each part of the slab travels along curved paths that are defined by the shape of the heat bubble on the inside and the shape of the cold bubble on the outside (that is, because heat goes from warm to cold), as shown by the black arrows in the diagrams. The exception is the edge of the slab, where it is exposed to the air and so heat moves directly out through the edge (straight black arrow).
The total heat loss from the slab is the sum of the heat flow through each of these paths, with the paths from the center of the slab having larger R-values (due to movement through more soil) and longer time lags. This lateral movement will change as the shape and temperature of the cold bubble change, though with some time lag: the deeper the movement, the longer the lag, so at some point the time lag is so large that there is no annual movement along that path. Clearly, then, the direct path through the slab edge and the paths from the outer edge of the slab through the nearby areas of ground are the paths of least resistance. This is why the focus has been on perimeter insulation; however, the interior area is much larger than the perimeter area. The loss directly through the perimeter is thus quite dependent on today's temperature, but moving away from the immediate edge, the loss becomes more dependent on the average temperatures of days, weeks or even months ago. This means you can either calculate an average loss based on the average ground temperature, or you need to include a factor that represents the ground temperature, which is really just a moving average of the outdoor temperature over some period of time.

Calculation methods

There are two common methods: a simple one, applicable only to structures whose ratio of floor area to perimeter length is less than 12 (that is, small buildings), which is easy to calculate; the other is to use energy modeling software. Energy modeling software can do very sophisticated analysis and is more likely to get an accurate result, but you have to buy it and spend time learning to use it, or alternatively hire an energy professional to do it for you. Given that construction is usually on a tight budget, the focus here is on doing it yourself. The common method is to assume that the loss directly through the perimeter is dominant, in which case you can calculate the loss through the slab using outdoor and indoor temperatures. The formula is: Q = F2 × P × ΔT, where P is the length of the slab perimeter and F2 is a factor that depends on the type of slab insulation and on local conditions. This was a simplification of a slightly more complex (and more accurate) model: Q = F1 × P × ΔT + 2 × A, where A is the uninsulated area under the slab, and F1 is calculated like F2 but ignoring any loss through the middle of the slab, which is accounted for in the equation by the 2 Btu/SF term. For the common conditions (that is, uninsulated, or at most R10 perimeter insulation), the F2 version was judged close enough until fairly recently. Note that F1 and F2 are per linear foot rather than per square foot. The obvious problem with the F1 model is that it assumes no interior insulation. Instead, the focus has been on extending the F2 model to include the loss through the entire slab, and the F2 model is now incorporated into many energy codes. You would think, then, that the values would be available all over the web, but they are not. The following table is based on ASHRAE 90.1 (2010), although it comes from the California Energy Commission.

Table: F-values, unheated slab.

These tables at least cover most of the possible insulation configurations, but if your configuration is not in the table, THERM (a free download from LBL) can apparently calculate a value for you. You still have to put in the time to learn to use it, so this may not be an attractive option.
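A short Python sketch of the F-factor method. The F values below are the ones quoted in the next paragraph; the perimeter length and temperatures are illustrative:

    # Slab loss via F-factor: Q = F * P * dT, with F per linear foot of perimeter.
    F = {
        "unheated, no insulation": 0.73,
        "heated, no insulation":   1.35,
        "unheated, full R30":      0.213,
        "heated, full R30":        0.296,
    }

    def slab_loss(f, perimeter_ft, t_in, t_out):
        """Seasonal-average hourly loss in Btu/hr."""
        return f * perimeter_ft * (t_in - t_out)

    for name, f in F.items():
        print(name, slab_loss(f, perimeter_ft=130, t_in=70, t_out=40))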
There are a few obvious things in these tables. First, it is easy to reach the point where increasing the thickness of the perimeter insulation, without also installing it in a wider strip, buys little. For example, going from R5 to R10 installed 24" wide makes no difference on an unheated slab; it is better to install R5 48" wide. Similarly, if you only install it 48" wide, there is no point going past R15, and little point past R10. The tables also show that the conventional belief that only perimeter insulation matters is not true: if you want large reductions in slab heat loss, you will need full under-slab insulation. Unfortunately, older versions of this chart were built under the assumption that only perimeter loss mattered (well, if you assume R11 walls and bad windows, it was sort of true), and as a result the common practice is to install only perimeter insulation. Comparing the heated and unheated slab values shows that these values depend not only on the inherent R-value of the situation but also on temperature, because the F-values of the two charts do not vary linearly. For example, with no insulation, the unheated-slab F is 0.73 while the heated one is 1.35, an increase of 85%; but for full R30 under-slab insulation, F goes from 0.213 to 0.296, an increase of 39%. By comparison, the temperature difference between a heated and an unheated slab is maybe 20°F or less, so the increase in ΔT compared to winter air temperatures is maybe 50%. Given that steady-state heat loss is generally considered linear, it is not clear what is going on here, or why you can't just have one table and use a higher indoor temperature when calculating the loss of a heated slab. Although the formula uses the outdoor air temperature to calculate heat loss, using F-factors will not produce an accurate heat loss for any given moment; it can only calculate seasonal losses. To calculate the loss at a given outdoor temperature, you would need to incorporate a term for the seasonal variation in ground temperatures (keeping in mind that a 20°F day in late fall will produce much less heat loss than one in early spring), and since this formula has no such term, there is no way it can produce that result. There have been several variations on this proposal, the main difference being that there is a time-varying component, and often also a division of the slab into perimeter and main slab. The first is to model the heat loss as a steady component plus a variable one, something like:

Q = Qm + Qa × sin(2π × (day - Dc - Dg) / 365)

In this model, Qm is the steady-state loss and Qa is the seasonal swing in the loss. In the seasonal part, "day" is the day of the year (1 to 365), Dc is the offset from January 1 of the day nearest the fall equinox where the average air temperature is the same as the average deep-ground temperature, and Dg is the number of days of time lag from the ground temperature to the air temperature. (Ref: 1, 2) Since the UA value is fixed for any given building, what this really means is:

Q = U × A × (Tin - Tgm - Tgs)

where Tin is the indoor temperature, Tgm is the average ground temperature, and Tgs is the seasonal variation of the temperature from the mean (note: this is the shallow surface temperature, which means Tgs is negative in summer), calculated with the same time-varying factor as Qa above.
The catch is that the loss through the perimeter is hardly affected by the deep ground, and the loss through the middle of the slab is hardly affected by seasonal temperature variations, so you need to estimate which area is perimeter and which is core slab, and what their respective U-values are. The difficulty with the formula above is that we do not know the size of the slab perimeter (that is, the part dominated by heat loss through the short path through the edge) relative to the rest of the slab, whose heat loss is dominated by the deeper ground temperature. Nor do we know the relative U-values of each area. Unlike heat loss by other mechanisms, which is documented by many sources, I could find only one source that solved this problem using only simple math, and the following is adapted from it (ref 3). In this method, the slab is characterized as having a single U-value, as we do for an above-grade wall, that is:

Q = Ueff × A × (Tin - Tg)

where Tg is a time-varying ground temperature representing the average ground temperature from the surface down to deep ground, and Ueff is calculated as follows (see the diagram at right):

Ueff = Af1 / (Rf1 + Rg1) + Af2 / (Rf2 + Rg2)

where Rf1 and Rf2 are the insulation R-values over the perimeter and interior areas. To arrive at the perimeter area fraction (Af1), the interior area fraction (Af2) and the associated ground resistances Rg1 and Rg2, software was used to calculate 48 different configurations, and the values were then chosen so that the simplified version matched the software's output. These values are: Af1 = 11.4%, Af2 = 87.7%, Rg1 = 4, Rg2 = 16, so substituting we get:

Ueff = 0.114 / (Rf1 + 4) + 0.877 / (Rf2 + 16)

It was also determined that you can approximate the ground temperature (that is, the average from the surface down to depth) as Tg = (To + Toa3) / 2, where To is the annual average ground temperature and Toa3 is the average outdoor temperature over the last three months. Whether the results are accurate to any reasonable degree, particularly for soil conditions and climates very different from Dayton, Ohio, where the analysis was done, or, more importantly, for insulation values and configurations different from its 48 test cases, is anyone's guess. Note that the diagram for the model shows exterior perimeter insulation rather than interior, which means there is a potential thermal bridge through the footing. Note also that there will be heat loss from the edge of the slab to the air where the slab sits above grade. This is an additional slab heat loss that is not included here, but it is much easier to model: as long as we assume the slab interior stays at a constant temperature, this area can be modeled exactly like a wall section. Calculate the surface area of the slab exposed to the air, Ap (which is just the perimeter length times the average exposed height of the slab, typically 4-6"), and use the R-value of the vertical perimeter insulation and the temperature difference between the slab and the outside. Use the indoor air temperature as a stand-in for the slab temperature, although unless the slab is very well insulated it will be somewhat colder than room temperature. Of course, if the heat loss is large enough, the slab will not stay at a constant temperature, but will end up colder than the indoor air.
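A sketch of this single-U-value method in Python. The fitted constants are the ones quoted above; the exact functional form and the ground-temperature approximation are my reading of ref 3, so treat them as assumptions rather than the published method:

    # Single-U-value slab model (adapted from ref 3). Area fractions and
    # ground resistances are the fitted values quoted in the text; the form
    # Ueff = Af1/(Rf1 + Rg1) + Af2/(Rf2 + Rg2) is an assumption.
    def u_eff(r_perimeter_ins, r_underslab_ins):
        return 0.114 / (r_perimeter_ins + 4) + 0.877 / (r_underslab_ins + 16)

    def ground_temp(t_annual_avg, t_out_3mo):
        # Approximation given in the text: average of the deep ground
        # (annual average) and the recent (3-month) outdoor average.
        return (t_annual_avg + t_out_3mo) / 2

    area = 1000.0                      # slab area, ft^2 (illustrative)
    tg = ground_temp(52, 42)
    q = u_eff(10, 0) * area * (70 - tg)  # R10 perimeter, no under-slab insulation
    print(q)                           # Btu/hr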
The following is speculation. Another possibility is to model the slab as a series of concentric perimeter strips (diagram at right, labeled P1, P2, P3 and M). If the strips we choose are thin enough, then each will have a fairly uniform U-value along its dominant heat loss path. To keep the calculation simple, the key is to find the minimum number of strips that gives a good enough result (here speculated to be 4). Next, model the ground as a series of slices whose temperature varies with time. If we assume roughly curved paths for the dominant heat loss, we can pair each slab strip with the ground slice that lies along its most probable heat transfer path. We choose the size of the ground slices by their approximate time lag in following changes in the outdoor temperature. For simplicity, the slices were divided here into Tg0, which depends on the last days-to-week of outdoor temperature; Tg1, which depends on the last month's temperature; Tg3, which depends on the last 3 months' temperature; and Tg, the average deep-ground temperature. The trick then is to guess what the widths of P1, P2 and P3 should be. A quick trial set the widths at P1 = 2', P2 = 2', P3 = 4', but that is just a guess (I chose it because P1 and P2 correspond to typical perimeter insulation widths, and P3 has to be wider because Tg3 is deeper). The limitation here is that if we choose the wrong widths for the slab strips, their dominant heat loss will not line up with the corresponding ground slice, and the model's assumptions will then be incorrect. Next, you must make an assumption about the R-value of the soil, which can vary from R0.4/ft to R10/ft depending on soil density and moisture content. A safe choice seems to be about 3 in dry climates, 1.5 in wet or humid climates, and maybe only 1 in winter-wet climates like the Pacific Northwest. For each strip, calculate an average path length to the surface (that is, along the curved path) through the corresponding ground slice. The simplest is to assume that the slices straight down outside the building are governed by the outdoor temperature, and that the heat distribution under the slab has formed a bubble, so the path length is a 1/4 circle, or more likely 1/4 of an ellipse, but we can probably model it as a straight line and that will be accurate enough. The underlying assumption is that the temperature at depth directly below the outside of the building depends only on historical outdoor air temperatures. To calculate the heat loss, then, you calculate the area of each slab strip, which is then subtracted from the total to give you the remaining area. Calculate the R-value of each strip's path as its average path length times the R-value of the soil. Then build a series of time-dependent temperatures for each month of the year; this lets you create a simple table of values for Tg0, Tg1 and Tg3 for each month. Use whichever month has the largest heat loss to size the HVAC equipment, and to calculate the heating-season loss, just sum over the months of the heating season. The formula would be:

Q = P × Av × (Tin - Tout) / Rp
  + A1 × (Tin - Tg0) / (Rs1 + Rg0)
  + A2 × (Tin - Tg1) / (Rs2 + Rg1)
  + A3 × (Tin - Tg3) / (Rs3 + Rg3)
  + M × (Tin - Tg) / (Rs4 + Rg3)

where P is the perimeter length, Av is the vertical height of the slab edge exposed to the air, Tin is the indoor temperature (and the assumed slab temperature), Tout is the outdoor air temperature, Rp is the vertical slab edge insulation, Rs1, Rs2, Rs3 and Rs4 are the amounts of horizontal sub-slab insulation for each of the corresponding slab strips, A1, A2 and A3 are the areas of the perimeter strips, and the area M is the remaining part of the slab that is not one of the perimeter strips (here given the deepest path length).
Rg0, Rg1 and Rg3 are the R-values of the soil along each dominant heat loss path, where each is Kg × L, Kg being the R-value per foot of soil and L being the path length, that is, the distance the heat travels. The path length L can be estimated by assuming the path is the hypotenuse of an isosceles right triangle (the distance from the vertical insulation to the center of the strip, times 1.414). This would be true if the ground slices corresponding to Tg0, Tg1, etc. are as deep as the slab strips are wide. While this is admittedly a fairly crude estimate, the key here is that the distances are short enough that treating the path length as a 1/4 circle, a 1/4 arc or some other triangle would not result in dramatically different distances; the difference between them is maybe only 20%, assuming the ballpark path length is not dramatically wrong. So, with this assumption, Rg0 = Kg × 1.414, Rg1 = Kg × 3 × 1.414, and Rg3 = Kg × 6 × 1.414. The value of Kg will be between 0.5 and 10, where 0.5 would be very wet soil and rock, and 10 would be dry, loose soil. Values of 1-2 seem fairly typical. Tg, Tg0, Tg1 and Tg3 are the ground temperatures of the ground slices. The idea here is to correlate these temperatures with the current outdoor temperature and recent past outdoor temperatures. As with the slab, the idea is to make slices with roughly the same properties. For the top slice, use the average of the outdoor temperature and the average monthly temperature for the month of interest: Tg0 = (Tout + Tm) / 2. For the next slice, use the monthly average temperature; for the next slice, use the average temperature over the last three months; and finally, for the deepest ground, use the average temperature over the last six months. While this is a wild guess, we know that as we go down, the temperature swing gets smaller and the time lag gets longer, so the assumptions follow that pattern even if they are off. This model clearly has many assumptions. Besides those about the path lengths and about the sizes of the ground slices matching the sizes of the slab strips, it assumes that the ground temperature below the outside surface depends only on the outdoor temperature, it does not model any horizontal heat movement within the slab itself (for example, from P2 to P1), and it assumes that the slab temperature is uniform and the same as the indoor air temperature. Clearly, all of these assumptions are probably wrong; the question is whether the net result is close enough.
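Below is a Python sketch of this strip model as described. The strip widths, the 1.414 path lengths and the slice pairings follow the text; the function name, the example temperatures, and the choice to give the middle area M the deepest path are illustrative assumptions:

    import math

    SQRT2 = math.sqrt(2)  # path = hypotenuse of an isosceles right triangle

    def strip_model_loss(t_in, t_out, temps, kg=2.0, rp=10,
                         rs=(10, 10, 0, 0), perim=130, av_ft=0.5,
                         widths=(2, 2, 4), slab=(25, 40)):
        """Hourly slab loss (Btu/hr) for one month.

        temps: (Tg0, Tg1, Tg3, Tg_deep) ground-slice temperatures, F.
        kg:    soil R-value per foot; rp: vertical edge insulation R.
        rs:    sub-slab insulation R over strips P1, P2, P3 and middle M.
        """
        # Ground-path R-values: Kg * path length (distances 1, 3, 6 ft times
        # 1.414 per the text; the middle area M is given the deepest path).
        rg = [kg * d * SQRT2 for d in (1, 3, 6, 6)]
        # Strip areas: concentric rings of the given widths; remainder is M.
        length, width = slab
        areas, inner_l, inner_w = [], length, width
        for w in widths:
            outer = inner_l * inner_w
            inner_l, inner_w = inner_l - 2 * w, inner_w - 2 * w
            areas.append(outer - inner_l * inner_w)
        areas.append(inner_l * inner_w)  # middle area M
        q = perim * av_ft * (t_in - t_out) / rp   # exposed edge, like a wall
        for a, r_ins, r_g, t_g in zip(areas, rs, rg, temps):
            q += a * (t_in - t_g) / (r_ins + r_g)
        return q

    # January-ish example: last week 41F, last month 42F, last 3 months 45F,
    # deep ground 52F (placeholder Seattle-like numbers).
    print(strip_model_loss(t_in=70, t_out=40, temps=(41, 42, 45, 52)))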
Since this model calculates different amounts of slab heat loss at different times of the year given identical outdoor air temperatures, it cannot be compared directly with the F-factor, which depends only on the outdoor temperature. To compare, I calculated a monthly heat loss using the average outdoor temperature for each month of the heating season, and then took the average of those (the net result is an average hourly heat loss for the whole slab). For the F-factor, I used the average outdoor winter temperature. I assumed the ground was R2/ft and modeled a 25x40 (1000 SF) slab (perimeter length 130 ft) using Seattle weather data (heating season October through April). I then compared the heat losses for: no insulation, 24" R10, 48" R10, full slab R10, and full slab R40. As can be seen in the table below, the slice model predicts much larger losses with no insulation, but similar to smaller losses for the other configurations. My overall conclusion is that the slice model is much more sensitive to the amount of perimeter insulation, and that for larger amounts of insulation the slice model gives smaller heat losses for outdoor temperatures below the ground temperature. What would be more interesting is to compare these results with ISO 13370, which is apparently the model used by PHPP. Unfortunately, it is $200 just to obtain the written standard (although I found a summary; unfortunately, the formula is highly obtuse).

Basement heat loss model: basements are similar to slabs, but with a larger vertical surface and therefore greater heat loss.

Crawlspaces: there are two types of crawlspaces, heated and unheated, where the heated type is essentially a basement that is too short to be very useful. Unheated crawlspaces are usually vented, although the amount of air movement is generally fairly minimal (though not so minimal that, in some climates, summer moisture won't condense on the cold ground and cause mold growth). Still, it is not vented so much that it will reach the outdoor temperature, so the heat loss through the floor above will be smaller because the temperature difference is smaller. Rather than modeling with the outdoor temperature, adjust it closer to the average ground temperature.

Heat loss through infiltration

Besides heat loss through the envelope via conduction, all buildings leak air; this mechanism is described in detail in the infiltration section. At issue is how to use these measured CFM50/ACH50 numbers to calculate the hourly heat loss for some typical or maximum actual conditions the building will experience. Ideally we'd like a table or formula that would allow us to know the value of ACH at various temperatures, so that we could get a more accurate total heat loss for any outdoor temperature, but because wind speed is typically such an important component of ACH, it's really impractical to do this; hence we use the estimates. The issue here is that we know the infiltration rate is higher when it's cold and windy, so unless the typical weather conditions are such that it's windier when temperatures are moderate than when they are cold, ACHnat will result in underestimating the heat loss due to infiltration. If you're calculating the worst-case heat loss, for example for equipment sizing, rather than using ACHnat in the heat loss formula, you might want to use ACH50/10 or even ACH50/5, which are both just different fudge factors than those used to calculate ACHnat; it really depends on your climate and whether you think ACHnat is a good estimate of infiltration at cold temperatures or not. (3) Once you've determined an infiltration rate, the heat loss is calculated via one of the following simple formulas:

Q = 0.018 × V × ACHn × ΔT, or Q = 1.08 × CFMn × ΔT

where Q is the hourly infiltration heat loss, V is the volume of the house, 0.018 is the heat capacity of air in Btu/ft³·°F, ΔT is the difference in temperature, and ACHn and CFMn are the normalized blower door test values for whatever conditions you want to assume, the typical assumption being ACHnat, i.e. the adjusted value based on the statistical model representing the "natural" ventilation rate. Note that the value 1.08 is just the heat capacity of air,
0.018, times 60, since CFM is a per-minute rate and we're looking for a per-hour rate. Intuitively this number represents the amount of heat contained in the air that leaks out, or more appropriately the amount of heat required to heat up the air that leaks in as a result of air leaking out. There is some evidence that as air leaks out through an insulated cavity, the cavity acts a bit like an HRV, but given that you don't really want air leaking through an insulated cavity, especially not at a slow rate where condensation can happen, it's best not to assume this happens. If you have mechanical ventilation, the calculation is essentially the same, but in this case the ventilation rate is whatever the fan is rated for. If the ventilation is an HRV or ERV, you need to adjust the temperature difference by the efficiency of heat recovery: for example, if ΔT is 50 degrees and the efficiency is 70%, then the effective temperature difference is only 30% of ΔT, or 15 degrees in this case.

Heat Loss Calculation Example

The following is an example house, to show a complete heat loss calculation. This house is 25'x40' (1000 SF) on the interior with an 8' ceiling, is built with the double stud wall shown in the example above, double-glazed low-E windows (U = 0.3), R5 doors, and has an unheated crawlspace. The doors are standard 3-0x6-8; east, west and north windows are 2x3, and south windows are 7x5. To simplify things, rather than using F-values for the floor loss, it is assumed an unvented crawlspace has an average temperature (4) of around 55°F. Assume the floor and the ceiling are built with 12" TJIs (or equivalent), and that the insulation is blown-in cellulose (R3.7/inch, or equivalent). The following are the east, south, west and north elevations for this hypothetical house (which is simplified to make the calculations easy: it's intended to be realistic enough, but it isn't a real house). Assume that the house measured 2 ACH50, which corresponds to a 0.2 ACH natural ventilation rate (which in turn is about 27 CFM). Because this is quite small, assume another 25 CFM of mechanical ventilation. For this example, we assume the house is in a moderate climate, with an average heating-season temperature of around 40°F, and a typical coldest night of around 20°F. The heat loss at the typical temperature will help calculate an approximation of what percentage of the necessary heat can be supplied with passive solar, while the heat loss on the coldest day will help size the backup heating equipment. Local codes specify this typical cold temperature, usually called the design temperature, which for Seattle is 23°F. You can change the values in the calculation to whatever you'd like if the assumptions made here are different from what you'd like to look at, but beware: there is no consistency checking, so if you enter bogus data, you'll get bogus results; also, the code will only run in a fairly recent browser. The spreadsheet is updated every time you change one of the values. All of the Btu values are per hour. To get a daily amount, use the average daily temperature to get an hourly loss, then multiply by 24. Because this house is super-insulated, and quite small, these values for heat loss are very low compared to typical heating systems, whose maximum output is more in the 40,000 to 80,000 Btu/hr range. For a fairer comparison, we should size the heat for the most extreme cold day, say 0°F, but even then this house still uses only 11,000 Btu/hr.
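A Python sketch of the whole calculation for this example house. The wall, roof and floor R-values and the window and door areas are stand-in assumptions (the text does not give a full takeoff of the elevations), but the infiltration, ventilation and crawlspace terms follow the numbers above:

    # Hourly heat-loss sketch for the example house (25'x40' interior, 8' ceiling).
    WALL_A  = 2 * (25 + 40) * 8 - 148  # gross wall minus openings (SF), assumed
    WIN_A   = 108                      # window area (SF), U = 0.30, assumed
    DOOR_A  = 40                       # two R5 doors, ~20 SF each, assumed
    ROOF_A  = 1000                     # ceiling (SF), assume roughly R60
    FLOOR_A = 1000                     # floor (SF), assume roughly R48

    UA_COND = WALL_A / 32.5 + WIN_A * 0.30 + DOOR_A / 5.0 + ROOF_A / 60.0

    def house_loss(t_out, t_in=70, t_crawl=55, hrv_eff=0.0):
        dt = t_in - t_out
        q_floor = FLOOR_A * (t_in - t_crawl) / 48.0    # crawlspace stays ~55F
        q_air = (27 + 25 * (1 - hrv_eff)) * 1.08 * dt  # infiltration + ventilation
        return UA_COND * dt + q_floor + q_air

    print(house_loss(40))   # typical heating-season hour
    print(house_loss(20))   # typical coldest night
    print(house_loss(0))    # extreme cold, for sizing backup heat (~11,000 Btu/hr)
    print(house_loss(40, hrv_eff=0.7))  # same hour with a 70%-efficient HRV

With these stand-in numbers, the 0°F case lands near the 11,000 Btu/hr figure quoted above, and the window, infiltration and ventilation shares come out roughly in line with the percentages discussed next.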
If we want to reduce the heat loss from this building, it's obvious that putting in better windows (currently 28% of the total loss), reducing infiltration (18% of total) and using an HRV (ventilation is currently 17%) would be the places to look. The caveat is that if our model has significant inaccuracies, then we won't get the savings we'd expect. To see how solar affects your building, see the passive solar version of the calculator.

Yearly heat loss / accuracy of the model

The heat loss model described here is for steady-state heat loss under ideal conditions. In the real world, these conditions are at least as uncommon as they are common (detailed discussion in the section on R-values). How much the actual loss will vary from the modeled loss is unclear. There are really only two significant factors: the sunny surfaces of the building will likely have a lower heat loss due to radiant gain, and roof heat loss will be higher when the night sky is clear and dry. Under a cloudy sky at night, the model should be pretty accurate, and on a cloudy day it will also be relatively close (although there will still be some radiant effect), but at all other times the real loss will vary from the model. While the heat transfer on a given day isn't likely to be much different from what is calculated, a small error in the daily amount will result in a significant error in the yearly amount. In addition, any given year's weather will vary from the average, so the annual heat loss calculation should be viewed as a ballpark estimate. Finally, nighttime setback (or for that matter any setback) in the thermostat setting may change your heat loss. Still, the yearly heat loss calculation can be used to compare one building to another fairly accurately, and will still provide a decent estimate of annual energy use (although you will have to factor in internal gain and solar gain; see the passive solar section). To estimate the yearly heating and cooling energy, you calculate the building's heat loss per degree (that is, just the sum of the U × A terms), multiply by 24 to convert hourly loss to daily, then multiply by the number of degree days for your location (for a discussion of degree days and their limitations, see the units section). So, for example, if the example house from above is located in a 4000 HDD climate, its seasonal loss will be 174.1 × 24 × 4000, or about 16.7 million BTU. The cooling energy is calculated the same way, only using the CDD number instead. The other catch is that buildings are usually kept at 68 to 70°F, not 65°F, so the actual heat loss is likely more than 16.7 million BTU. HDD numbers for indoor temperatures other than 65°F are available, but my experiments indicate they give too large a result--at least for Seattle--and I'm assuming it's because almost every day in Seattle has an average temperature below 70°F, yet clearly many of them are close enough to 70 (and with enough sunshine) that no external energy is used--yet you can't weigh the solar gain against the heat loss, because it's likely to be greater than the heat loss, and the windows are likely to be open to exhaust all the extra gain. (5) Note also that real-world buildings are much more complex than the simple model presented in the above example: the R-value of a wall varies by how much lumber is actually in it; some wall sections often end up getting built differently than others; pipes, ducts and other voids reduce R-value to less than the nominal value; and most buildings have quite a few more than four wall surfaces.
The more accurately all these things are accounted for, the more accurate the result will be.

1: In particular, when upgrading to super-insulation, there is some tendency to keep a house warmer than it would previously have been, so the net savings is sometimes smaller than expected.

2: No source I could find dealt with these factors at all, nor could I find any data to indicate how big they are. The typical response is to just put more insulation in the attic, for instance, because the summer roof is often hotter than the air temperature, and the winter roof is often colder.

3: My take is that it doesn't in many climates, simply because winters often seem windier, but even if that's not the case, stack effect is clearly greater when it's cold, so taking an average will underestimate heat loss when it's cold and overestimate when it's not. But because the formula multiplies by temperature difference, the cold underestimate will be greater than the warm overestimate, leaving a bias. At least that's the thought.

4: A vented crawlspace will presumably have a lower temperature during the heating season. There are more complex methods of calculating this downward loss, but because we are only interested in a ballpark result, this simplification is probably reasonable. As an example of using F-factors, if instead we assumed the house was slab on grade with full R10 insulation, we would find the F-factor from the table is 0.36 if the slab is unheated and 0.55 if it is heated. Since the perimeter of the house is 130 ft, this gives a loss per °F of between 46.8 Btu and 71.5 Btu. Compared to the crawlspace R48 version, that's significantly more heat loss: in order to get the same heat loss (based on F-factors), you'd need an F of 0.162, which according to the table is R55. Although that would clearly indicate that the F-factors in the table aren't accurate for all situations, it still implies that R10 sub-slab insulation is not that much.

5: I've spent a bunch of time trying to make my model of the Seattle house match the energy use I actually see--unfortunately we have a gas stove, a gas dryer and gas hot water, so there is extra work in separating those out from heating energy. Using HDD (65F), I get a loss of 45 mBTU/yr, which is the actual value, but I also calculate that I have 10 mBTU of internal gain (electrical load) and anywhere from 7-12 mBTU of solar gain. When I tried HDD (70F) I got a loss of more like 60 mBTU, which is too high, but then the house is typically at 68F. Either there is a sizeable error in my model, or HDD just doesn't give a good result. While my model does likely have errors, I'm convinced HDD doesn't give that good a result. See this article for a detailed discussion: energylens.com/articles/degree-days. In particular, HDD (65F) underestimates because the building is typically warmer, and any other HDD values are too large because the model assumptions are wrong when the temperature is over 60F outside--the heat is usually off, and an excess night heat loss just results in indoor temperatures going below 68F, then climbing back to it or above during the day.

6: These numbers are typical of much of the central US, and are for example only. In the southern US, 60-65 would be more typical; in the north, 45 would be typical.

References
1: Algorithms for Slab on Grade Heat Transfer Calculations, William P.
Bahnfleth, JoAnn Amber, 1991.
2: Simplified Method for Underground Heat Transfer Calculation, Sangho Choi, Moncef Krarti, University of Colorado.
3: Energy Efficient Buildings, Floors and Basements, John Kissock, University of Dayton, Ohio.

Welcome to Antapex.

EPR steering in Quantum Mechanics (QM).

Version: 10 January, 2017.

Some ideas on quantum entanglement and non-locality were rediscovered over the last 15 or 20 years (or so), mainly based on ideas of Schrodinger on EPR steering, which were expressed in 1935. There indeed exists a subtle difference between what we describe as entanglement, Bell non-locality, and steering. I would like to say something about those subjects, since it is absolutely fascinating stuff. So, in case you are rather unfamiliar with such subjects, this note might be of interest. The first four chapters will describe some well-known effects of entanglement that traditionally led to the so-called EPR paradox. So, these first four chapters are a bit old-school, I think. After the many recent efforts that were, and are, spent on finding the essence of steering, entanglement, and non-locality, it now seems that the views developed in the years before the 90s probably needed quite some revision, especially due to all the research since the 2000s. However, the first four chapters will present the (pre-90s) old-school ideas first, since this probably still remains the best way to present such material. In chapter 5, I will try to describe a few specifics of EPR steering, but it will be of a very lightweight nature, and can only be of interest if you are really unfamiliar with the subject. Inevitably, a problem pertinent to any interpretation of QM must be addressed: in chapter 6, I will try to say something useful about the measurement problem and the role of the observer. Lastly, in chapter 7, I would like to touch upon some quite radical ideas on the interpretation of QM, namely some new parallel theories like MIW (not Everett's MWI) and related theories, and some other theoretical studies of spacetime at the smallest possible scale. Let's try to see what this is about.

1. Introduction.

1.1 Some background information.

We know that it is not allowed for information to travel faster than c. However, there exist certain situations in Quantum Mechanics (QM) where it appears that this rule is broken. I immediately hasten to say that virtually all physicists believe that the rule still holds, but that something else is at work. What that something else precisely is, is not fully clear yet, although some well-founded ideas do exist. QM uses several flavours to mathematically describe entities (e.g. a particle), properties (e.g. position, spin), and events. One such flavour which is often used is the Dirac (vector) notation. Suppose we have a certain observable (property) of a particle. Suppose that this observable indeed can be measured by some measuring device. For the purpose of this sort of text, the spin of a particle is often used as the characteristic observable. This spin resembles an angular magnetic momentum, and can either be up (often written as |1⟩), or down (written as |0⟩), when measured along a certain direction (like for example the z-axis in R^3). Most physicists agree on the fact that the framework of QM is intrinsically probabilistic. It means that if the spin of a particle is unmeasured, the spin is a linear combination of both up and down at the same time.
Actually, it resembles a vector in 2-dimensional space (actually a 3D Bloch sphere), so in general such a state can be written as:

|φ⟩ = a|1⟩ + b|0⟩    (equation 1)

(Note: arguments can be found to speak of a 3D Bloch sphere, but we don't mind about this at the moment.)

Here |1⟩ and |0⟩ represent the basis vectors of such a superposition. This is indeed remarkable by itself. But this is how it fits in the framework of QM. There is a certain probability of finding |φ⟩ = |1⟩ and a certain probability of finding |φ⟩ = |0⟩ when a measurement is done. For those probabilities it must hold that |a|² + |b|² = 1, since the total of the probabilities must add up to one. By the way, the system described in equation 1 is often called a qubit, as the quantum mechanical bit in quantum computing. Similarly, a qutrit can be written as a linear combination of |0⟩, |1⟩ and |2⟩, which are three orthogonal basis states. It all really looks like it is in vector calculus. The qutrit might thus be represented by:

|φ⟩ = a|0⟩ + b|1⟩ + c|2⟩    (equation 2)

However, in most discussions, the qubit as in equation 1 plays the central role. Now, suppose we have two non-interacting single-qubit systems |φ₁⟩ and |φ₂⟩ (close together). Then their combined state, or product state (that is: when they are not entangled), might be expressed by:

|Ψ⟩ = |φ₁⟩ ⊗ |φ₂⟩ = a₀₀|00⟩ + a₀₁|01⟩ + a₁₀|10⟩ + a₁₁|11⟩    (equation 3)

Such a state is also called separable, because the combined state is a product of the individual states. If you have such a product state, it is possible to factor out (or separate) each individual system from the combined equation.

Note: when we say of equation 1, |φ⟩ = a|1⟩ + b|0⟩, that it is a linear combination of both up and down at the same time, thus simultaneously, this statement is a common interpretation in QM. Most folks do not question this interpretation; however, some folks still do.

In some cases, an equation such as equation 3 does not work for a combined system of particles. In such a case, the particles are fully intertwined, and in such a way that a measurement on one affects the state of the second one. The latter statement is extremely remarkable, and is what people nowadays would call steering, a special subset of the more general term entanglement. Suppose we start out with a quantum system with spin 0. Now suppose further that it decays into two particles. Since the total spin was 0, it must be true that the sum of the spins of the new particles is zero too. But we cannot say that one must have spin up and the other must have spin down. However, we can say that their combination carries zero spin. It may appear strange, but a good way to denote the former statement is by the following equation:

|Ψ⟩ = 1/√2 (|01⟩ - |10⟩)    (equation 4)

Note that this is a superposition of two states, namely |01⟩ and |10⟩. In QM talk, we say that we have a probability of ½ to measure |01⟩ for both particles, and likewise a probability of ½ to measure |10⟩ for both particles. That is, after measurement. Note that an expression such as |10⟩ actually seems to say that one particle is found to be up, while the other is found to be down. But the superposition means that both particles can be in any state, at the same time. Before measurement, we simply do not know. We only know that |Ψ⟩ is |Ψ⟩.
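The difference between equation 3 and equation 4 can be checked numerically: a separable two-qubit state reshapes into a rank-1 matrix, while an entangled one does not. A small numpy sketch (the particular states and the tolerance are illustrative):

    import numpy as np

    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])

    # Equation 3: a separable product of two single-qubit states.
    phi1 = (ket0 + ket1) / np.sqrt(2)
    phi2 = ket0
    product = np.kron(phi1, phi2)

    # Equation 4: the entangled singlet, 1/sqrt(2) (|01> - |10>).
    singlet = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

    def schmidt_rank(state):
        # Reshape the 4-vector into a 2x2 matrix; the number of nonzero
        # singular values is 1 for separable states, 2 for entangled ones.
        s = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
        return int(np.sum(s > 1e-12))

    print(schmidt_rank(product))  # 1 -> separable
    print(schmidt_rank(singlet))  # 2 -> entangled, cannot be factored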
1.2. Describing the strange case.

The following case uses two persons, namely Alice and Bob, each at a separate remote location. Each has one member particle of an entangled system (like equation 4) in their lab. The description of the case below is not without criticism:

- One important question is whether one should consider the case using a pure-state interpretation or a mixed-state interpretation. Modern QM insights say that this is very important.
- Furthermore, as the spacetime separation increases, some may also express doubts as to how real (or effective) a description such as equation 4 remains. However, this argument is rather weak. Entanglement is really quite established, and confirmed, even over large distances. Only dissipation of the entangled state, due to interactions with the environment (decoherence), might weaken or destroy the entanglement.
- Also, one may argue that both Alice and Bob will simply measure up or down for their member particle, and no strings attached. A down-to-earth view states that nothing Bob does, or Alice does (or measures), will change a thing about their private measurements. One might suspect that only if Alice informs Bob, or the other way around, might one find correlations. Such viewpoints probably complicate how to interpret the results. However, the steering of Alice's findings on Bob's member particle is believed to be true, since experimental results support this view.

Whatever is true, or whatever we need to be careful of, I would like to present the case in its original form. However, it will be a simplification of the original idea and of later experimental setups. Fortunately, it is quite accepted to present the case this way. Look again at equation 4. Both states, |01⟩ and |10⟩, seem to have an equal probability to be found once a measurement has been performed. If any measurement is performed, the state reduces to |01⟩ or |10⟩, independent of any distance. This is the heart of the apparent problem. Note: the whole system seems to be pure, in a superposition, while each term seems to be mixed. The differences will be touched upon in chapter 3 (on pure and mixed states). Suppose we have an entangled system again, which can be described by equation 4. Before we do any measurement, suppose we have a way to separate the two particles. Let's say that the distance separating the particles gets really large. Alice is in location 1, where particle 1 is moving to, and Bob is in location 2, where particle 2 just arrived. Now, Alice performs a measurement to find the spin of particle 1. The amazing thing is that if she measures up along a certain axis, then Bob must find down along the same axis. Do not think too lightly of this. We started out by saying that (in QM language) both states, |10⟩ and |01⟩, have an equal probability to be found. The total state is always a superposition of |01⟩ and |10⟩, where each has an equal probability. How does particle 2 know that particle 1 was found by Alice to be in the |1⟩ state, so that particle 2 now knows that it must be |0⟩? You might say that particle 1 quickly informs particle 2 of the state of affairs. But this gets very weird if the distance between both particles is so large that only a signal (of some sort) faster than the speed of light could be involved. That is quite absurd, of course. Note: equation 4 is not a so-called mixed state. It is a superposition, and a pure state. The probabilities calculated with a mixed state go a little differently compared to true pure states. The paradox was first conceived by Einstein and a few colleagues (1935), who initially believed they were dealing with a mixed state. See chapter 3 for a comparison between mixed and pure states.
The apparent paradox is that a measurement on either of the particles seems to collapse the state |Ψ⟩ = 1/√2 (|01⟩ - |10⟩), thus the state of the entire entangled system, into |01⟩ or |10⟩. But the superposition (equation 4) was in effect all the time. Why that collapse, which always determines the state of the second particle? In effect, if you observe one particle along some measurement axis, then the other one is always found to be the opposite. This seems to happen instantaneously, for which we have no classical explanation. The effect was experimentally confirmed by Stuart Freedman (et al.) in the early 70s, and the Aspect experiments of the early 80s are quite famous. However, since the experiments were statistical in character, they were not fully loophole-free. More on this later. By the way, the first loophole-free experiments were done in 2015 (Delft), almost conclusively confirming the strange effect described above. In this setting, it really looks as if Alice steers what Bob can find.

One (temporary) explanation with a certain consensus among physicists: it would not be good to leave the apparent paradox fully open at this point. It is true that many details must be worked out further, since all the descriptions above are presented in a very simple and incomplete manner. If the distance between Alice and Bob is sufficiently large, then if you assume that the first measured particle informs the second particle which state it must take, such a signal must travel faster than the speed of light. This is quite unacceptable for most physicists. In the thirties of the former century, several models were proposed (Einstein: see chapter 2), of which the (local) hidden variables theory was the most prominent one. Essentially it is this: at the moment the entangled pair (as in section 1.2) is created, a hidden contract exists which fully specifies their behaviours in what only seem to be non-local events. It is only due to our lack of knowledge of those hidden variables that we think of a spooky action at a distance. In a way, this hypothesis is a return to local realism.

A modern understanding: a modern understanding lies in the superposition of the entangled state as expressed by equation 4. Alice may measure her qubit, and she finds either up or down, each with a 50% probability. She knows nothing about Bob's measurement, if he indeed did one at all. This modern interpretation then says that Alice does not know for certain what Bob's finding is, or will be, unless Bob does his measurement on his member particle (along the same direction). Now, the magic actually sits in the words "unless Bob does his measurement", which also implies that Alice and Bob (at a later time) compare their results. This magic thus sits in the entanglement, or non-locality, where both terms are rather similar when considering pure states. Now, researchers are still faced with the astonishing inner workings of entanglement, non-locality, and steering, on which I hope this simple note can shed some light. First, we need some more information on the historical setting, and on the spooky action at a distance as it was perceived in the 30s, and even up to the 90s of the former century. So, there is a strange effect, but it is really the fact of entanglement and non-locality (and steering) which do not always behave as we know from classical physics.
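The correlations described above can be simulated once you grant the Born-rule joint probabilities for the singlet: perfectly anticorrelated along a common axis, and a 50/50 marginal for each observer alone (so neither party can signal the other). A Python sketch, with illustrative measurement angles:

    import math, random

    def singlet_pair(a_angle, b_angle):
        """Sample one (Alice, Bob) outcome pair (+1/-1) for spin measurements
        along the given axes, using the Born-rule joint probabilities for the
        singlet: P(opposite outcomes) = cos^2((a - b) / 2)."""
        p_opposite = math.cos((a_angle - b_angle) / 2) ** 2
        alice = random.choice([+1, -1])        # Alice alone sees 50/50
        if random.random() < p_opposite:
            return alice, -alice
        return alice, alice

    N = 100_000
    same_axis = [singlet_pair(0.0, 0.0) for _ in range(N)]
    print(all(a == -b for a, b in same_axis))  # True: perfect anticorrelation

    # The correlation E(a, b) approaches -cos(a - b), e.g. -0.5 for 60 degrees.
    gap = math.radians(60)
    pairs = [singlet_pair(0.0, gap) for _ in range(N)]
    print(sum(a * b for a, b in pairs) / N)    # close to -cos(60 deg) = -0.5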
Of course, also in later decades, countless refinements and discoveries took place. However, the original basic fundaments of QM were laid in the aforementioned period. Einstein contributed massively as well. My impression is that his original positivism towards the theory slowly diminished to a certain extent, mainly in the field of the interpretation of QM, and, more importantly, on the question to what extent the theory of QM truly represents reality. Together with a few colleagues, in 1935, he published his famous EPR article: Can A Quantum-Mechanical Description of Physical Reality Be Considered Complete? (1935). This classic article can be found in many places. Even in the title you can already see some important themes which occupied Einstein: Physical Reality and Complete.

There are several circumstances and theoretical QM descriptions which troubled Einstein. Here, I like to describe (in a few words) the following four themes:

(1): As an example of Einstein's doubts may serve the determination of position and momentum, which are quite mundane properties in the classical world. However, with so-called quantum mechanical non-commuting observables, it is not possible to measure (or observe) them simultaneously with unlimited precision. This is fairly quick to deduce using a wave-function notation of a particle. It is also expressed by one of Heisenberg's uncertainty principles:

Δx Δp ≥ ½ ℏ

This relation actually says that if you are able to measure the position (x) very precisely, then the momentum (p) will be (automatically) very imprecise. And vice versa. These sorts of results of QM made Einstein (quite rightfully) question how much reality we can attribute to the outcomes of QM.
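To make theme (1) a bit more tangible: the non-commutativity behind the uncertainty relation can be checked directly with matrices. A tiny sketch (Python with NumPy, using the standard Pauli spin matrices; the example is my own, not Einstein's):

```python
import numpy as np

# Two non-commuting spin observables: the Pauli matrices sigma_x and sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# The commutator [sx, sz] = sx.sz - sz.sx is nonzero,
# so these two observables cannot be sharp simultaneously.
commutator = sx @ sz - sz @ sx
print(commutator)                      # [[0, -2], [2, 0]]
print(np.allclose(commutator, 0))      # False
```

A nonzero commutator is exactly the mathematical fingerprint of observables that cannot both be measured with unlimited precision.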
(2): Then we have the problem of local realism too. In a classical view, local realism is only natural. For example, if two billiard balls collide, then that's an action which causes momentum to be exchanged between those particles. As another example: a charged particle in an electric field notices the local effect of that field, and it may influence its velocity. As we have seen in section 1.2, the measurement of one particle of an entangled pair seems to directly (instantaneously) have an effect on the measurement of the other particle, even if the distance is so large that the speed of light cannot convey information from the first particle to the second one in time. This is an example of non-locality. Many others had strong reservations about non-locality too. Quite a few conservative (in some respects) hypotheses emerged, most notably the Hidden Variables theory. In a nutshell it means this: at the moment the entangled pair (as in section 1.2) is created, a hidden contract exists which fully specifies their behaviours in what only seem to be non-local events. It's only our lack of knowledge of those hidden variables which makes us think of a spooky action at a distance. In a way, this hypothesis is a return to local realism. I must say that some alternatives to the Hidden Variables existed too.

In 1964, the physicist John Stewart Bell proposed his Bell inequality: a mathematical derivation which, in principle, makes it possible to test whether a local realistic theory could produce the same results as QM. The Bell theorem was revised at a later moment, making it an even stronger basis for a conclusive test. Although Bell's theorem is not controversial among physicists, still a few have reservations. The revised Bell inequality has indeed been put to the test in various experiments, in favour of QM. These tests seem to invalidate local theories, like the Local Hidden Variables, and promote the non-local features of QM.

(3): The EPR authors also had some serious doubts on how to handle an entangled system, such as described above in 1.2. For example, if Alice would like to change the set of basis vectors, how would it affect Bob's system?

(4): This theme is again about an entangled system. This time, the EPR authors considered entanglement mainly with respect to position and momentum. According to QM, both observables cannot be sharply observed simultaneously. The authors then provide arguments as to why QM fails to give a complete description of reality. Given the fact that QM was fairly new at that time, it seems a quite understandable viewpoint, although various physicists strongly disagreed with those arguments. As of the 90s, it seems to me that more and more people started to doubt the argumentation of the EPR authors, partly due to newer insights or theoretical developments. But, as already mentioned, also in the 30s some physicists fundamentally disagreed with Einstein's views (like for example Bohr).

Before we go to EPR steering and some other great proposals, let's take a look at a nice example which has hit the spotlights the last few decades, namely Quantum Teleportation. I really do not have a particular reason for this example. But it exhibits strong characteristics of non-locality, and something which many folks call the EPR channel. And amazingly, we will see that we need to use classical bits and a classical channel too.

1.4 Quantum Teleportation (QT).

The following classic article, published in 1993: Teleporting an Unknown Quantum State via Dual Classical and Einstein-Podolsky-Rosen Channels (1993), by Charles H. Bennett, Gilles Brassard, Claude Crépeau, Richard Jozsa, Asher Peres, and William K. Wootters, really started to set the QT train in motion.

Quantum Teleportation is not about the teleportation of matter, like for example a particle. It's about teleporting the information which we can associate with that particle, like the state of its spin. For example, the state of the system described by equation 1 above. A corollary of Quantum Information Theory says that unknown quantum information cannot be cloned. This means that if you would succeed in teleporting quantum information to another location, the original information is lost. This is also often referred to as the no-cloning theorem. It might seem rather bizarre, since in the classical world many examples exist where you can simply copy unknown information to another location (e.g. copying the content of a computer register to another computer). In QM, it's actually not so bizarre, because if you look at equation 1 again, you see an example of an unknown state. It's also often called a qubit, as the QM representative of a classical bit. Unmeasured, it is a superposition of the basis states |0⟩ and |1⟩, using coefficients a and b. Indeed, unmeasured, we do not know this state. If you would like to copy it, you must interact with it, meaning that in fact you are observing it (or measuring it), which means that it flips into one of its basis states. So, it would fail. Hence the no-cloning theorem for unknown information. Note that if you would try to (strongly) interact with a qubit, it collapses (or flips) from the superposition into one of the basis states.
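The failure of naive copying can be demonstrated numerically. The sketch below (my own illustration, not the formal operator proof mentioned next) uses a CNOT gate, which does copy the basis states |0⟩ and |1⟩ onto a blank target, but demonstrably fails for a superposition:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# CNOT: flips the second qubit if the first is |1>; it copies basis states.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Copying a basis state works: |1>|0> -> |1>|1>
print(CNOT @ np.kron(ket1, ket0))               # [0, 0, 0, 1] = |11>

# But for a superposition a|0> + b|1> ...
a, b = 0.6, 0.8
psi = a * ket0 + b * ket1
cloned_attempt = CNOT @ np.kron(psi, ket0)      # a|00> + b|11>: entangled!
true_clone = np.kron(psi, psi)                  # what a real copy would be
print(np.allclose(cloned_attempt, true_clone))  # False: cloning fails
```

The "copy" produced the entangled state a|00⟩ + b|11⟩ instead of two independent copies, which is exactly the obstruction the no-cloning theorem formalizes.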
Instead of the informal talk above, you can also formally work with an operator on the qubit which tries to copy it, and then it can be proven that it can't be done. One of the latest records in achieved distances over which Quantum Teleportation succeeded is about 150 km. What is it, and what might an experimental setup look like?

Again, we have Alice and Bob. Alice is in Lab 1, and Bob is in Lab 2, which is about 100 km away from Alice. Suppose Alice is able to create an entangled 2-particle system, with respect to the spin. So, the state might be written as |Ψ⟩ = (1/√2)(|01⟩ + |10⟩), just like equation 4 above. It's very important to realize that we need this equation (equation 4) to describe both particles, just as if they are melted into one entity. As a side remark, I like to mention that actually four of such (Bell) states would be possible, namely:

|Ψ₁⟩ = (1/√2)(|00⟩ + |11⟩)
|Ψ₂⟩ = (1/√2)(|00⟩ − |11⟩)
|Ψ₃⟩ = (1/√2)(|01⟩ + |10⟩)
|Ψ₄⟩ = (1/√2)(|01⟩ − |10⟩)

In the experiment below, we can use any of those to describe an entangled pair.

Now, let's return to the experimental setup of Alice and Bob. Let's call the particle which Alice holds particle 2, and the one which Bob holds particle 3. Why not 1 and 2? Well, in a minute, a third particle will be introduced. I like to call that particle 1. This new particle (particle 1) is the particle whose state will be teleported to Bob's location. At this moment, only the entangled particles 2 and 3 are both at Alice's location. Next, we move particle 3 to Bob's location. The particles 2 and 3 remain entangled, so they stay strongly correlated. After a short while, particle 3 arrives at Bob's Lab. Next, a new particle (particle 1), a qubit, is introduced at Alice's location. In the original figure (not reproduced here), these actions are represented by subfigures 1, 2, and 3.

The particles 2 and 3 are of course still entangled. This situation, or non-local property, is often also expressed (or labeled) as an EPR channel between the particles. This is presumably not to be understood as a real channel between the particles, like a channel in the classical world. In chapter 2, we try to see what physicists are suggesting today as to which physical principles may be the source of the EPR channel/non-locality phenomenon.

Let's return to the experimental setup again. Suppose we have the following:

- The entangled particles, particles 2 and 3, are collectively described by |Ψ⟩₂,₃ = (1/√2)(|01⟩ + |10⟩).
- The newly introduced particle, particle 1 (a qubit), is described as we already saw in equation 1, thus by |φ⟩₁ = a|0⟩ + b|1⟩.

Also note the subscripts, which may help in distinguishing the particles. At a certain moment, when particles 1 and 2 are really close (as in subfigure 4 of the figure), we have a 3-particle system, which has to be described using a product state, as in:

|θ⟩₁₂₃ = |φ⟩₁ ⊗ |Ψ⟩₂,₃ (equation 5)

Such a product state does not imply a strong measurement or interaction, so the entanglement still holds. Remember, we are still in the situation as depicted in subfigure 4 of the figure. We now try to rewrite our product state in a more convenient way. If the product is expanded, and some rearrangements are done, we get an interesting end result. It's quite a bit of math, and does not add value to our understanding, I think, so I will represent this end result in a sort of pseudo ket equation. Note the factor |Φ⟩₁₂: we have managed to factor out the state of particles 1 and 2 into the |Φ⟩₁₂ term.
At the same time, the state of particle 3 looks like a superposition of four qubit states. Indeed, actually it is a superposition. Now, Alice performs a measurement on particle 1 and particle 2. For example, she uses a laser, or EM radiation, to alter the state of |Φ⟩₁₂. This will result in |Φ⟩₁₂ collapsing (or flipping) into another state. It will immediately have an effect on particle 3, and particle 3 will collapse (or be projected, or flip) into one of the four qubit states as we have seen in equations 5 and 6 above. Of course, the entanglement is then gone, and so is the EPR channel.

Now note this: while Alice made her measurement, a quantum gate recorded the classical bits that resulted from that measurement on particles 1 & 2. Before that measurement, nothing was changed at all. Particle 1 still had its original ket equation |φ⟩₁ = a|0⟩ + b|1⟩; we only smartly rearranged equation 5 into equation 6 or 7, that's all. Now, it's possible that you are not aware of the fact that quantum gates exist which function as experimental devices by which we can read out the classical bits that resulted from Alice's measurement. This is depicted in subfigures 5 and 6 of the original figure. These bits can be transferred in a classical way, using a laser, or any sort of other classical signalling, to Bob's Lab, where he uses a similar gate to reconstruct the state of particle 3, exactly as the state of particle 1 was directly before Alice's measurement.

It's an amazing experiment. But it has become a reality in various real experiments.

- Note that such an experiment cannot work without an EPR channel, or, one or more entangled particles. It's exactly this feature which sees to it that particle 3 will immediately respond (with a collapse) to a measurement far away (in our case: the measurement of Alice on particles 1 & 2).
- Also note that we need a classical way to transfer the bits which encode the outcome at particle 1, so that Bob is able to reconstruct the state of particle 3 into the former state of particle 1. This can only work using a classical signal, thus QT does NOT breach Einstein's laws.
- Also note that the no-cloning theorem was respected here, since just before Bob was able to reconstruct the state of particle 1 onto particle 3, the state of the original particle (particle 1) was destroyed in Alice's measurement.
- Again, note that both a classical and a non-classical (EPR) channel are required for QT to work.

A small numerical sketch of the whole protocol follows below.
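Here is that sketch (Python/NumPy). Everything in it, from the choice of the Bell pair to the correction table, is my own illustrative reconstruction of the textbook protocol, not code from the 1993 paper:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Particle 1: the unknown qubit a|0> + b|1> (equation 1).
a, b = 0.6, 0.8j
phi1 = a * ket0 + b * ket1

# Particles 2,3: an entangled pair, here (1/sqrt2)(|00> + |11>).
pair23 = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Equation 5: the 3-particle product state |phi>_1 (x) |Psi>_23.
state = np.kron(phi1, pair23)

# The four Bell states for Alice's joint measurement on particles 1 and 2,
# keyed by the two classical bits she will send to Bob.
bell = {
    (0, 0): (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2),  # Phi+
    (0, 1): (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),  # Psi+
    (1, 0): (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2),  # Phi-
    (1, 1): (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2),  # Psi-
}
# Bob's correction per received bit pair.
fix = {(0, 0): I2, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}

# Conditional (unnormalized) state of particle 3 per outcome: (<Bell| (x) I)|state>
amps = {bits: np.kron(bvec.conj(), I2) @ state for bits, bvec in bell.items()}
probs = {bits: np.real(v.conj() @ v) for bits, v in amps.items()}
print(probs)   # each outcome has probability 1/4

# One run of the experiment: pick an outcome at random.
rng = np.random.default_rng(7)
keys = list(bell)
bits = keys[rng.choice(4, p=[probs[k] for k in keys])]

# Bob receives the two classical bits and applies the matching correction.
bob = fix[bits] @ (amps[bits] / np.sqrt(probs[bits]))

# Overlap with the original qubit (up to an irrelevant global phase): 1.0
print(abs(np.vdot(phi1, bob)))
```

Note how the two classical bits are indispensable: without `bits`, Bob would not know which correction to apply, and his particle alone would look like a maximally mixed state.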
2. A few words on some operations and notations.

Before I describe a mixed state, it's probably nice to introduce some common operations and notations. This will be very short, and very informal, with the sole intention to provide an intuitive understanding of such common operations and descriptions. It is also important for understanding the rest of this note, so I invite you to read this chapter too.

If we consider for a moment vectors with real components (instead of complex numbers), some notions can be easily introduced, and become accessible to literally everyone. As a basic assumption, we take the following representation as an example of a state: |φ⟩ = Σᵢ aᵢ |uᵢ⟩, like for example |φ⟩ = a₁|u₁⟩ + a₂|u₂⟩ + a₃|u₃⟩. Especially, I like to give a plausible meaning to the following notations:

(1): ⟨A|B⟩: the inner product of vectors, usually interpreted as the projection of |A⟩ on |B⟩.
(2): |B⟩⟨A|: usually corresponds to a matrix, or linear operator.
(3): |φ⟩⟨φ|: corresponds to the density matrix of a pure state.
(4): ⟨φ|O|φ⟩: corresponds to the expectation value of the observable O.
(5): The trace of an operator: Tr(O) = Σᵢ ⟨uᵢ|O|uᵢ⟩.

Proposition 1: ⟨A|B⟩ is the inner product of vectors, or kets.

- Inner product of two kets, ⟨A|B⟩: if we indeed use the oversimplification in ℝ³, then a (regular) vector, or ket, |B⟩ can be viewed as a column vector with elements bᵢ. We know that we can represent a vector as a row vector too. In QM, this has a special meaning, called a bra: the row vector with the complex conjugate elements bᵢ*. Let's not worry about the term complex conjugate, since you may view it as a sort of mirrored number; and if such an element is a real number, then the complex conjugate is the same number anyway. So the bra ⟨B| can be viewed as the row vector with elements bᵢ*. The inner product, as we know it from linear algebra, operates in QM too, and works the same way. The inner product of the kets |A⟩ and |B⟩ (as denoted by Dirac) is notated as ⟨A|B⟩. In basic linear algebra, we usually write it as A · B, or sometimes as (a, b); however, we stick to the bra-ket notation. It is a number, as we also know from elementary vector calculus. Usually, as an interpretation, ⟨A|B⟩ can be viewed as the length of the projection of |A⟩ on |B⟩. Or, since any vector can be represented by a superposition of basis vectors, |⟨Φ|φᵢ⟩|² represents the probability that |Φ⟩ collapses (or projects, or changes state) to the state |φᵢ⟩.

- Inner product of a ket with a basis vector, ⟨uᵢ|φ⟩: another nice thing to know is that if you calculate the inner product of a (pure) state, like |φ⟩ = a₁|u₁⟩ + a₂|u₂⟩ + a₃|u₃⟩, with one of its basis vectors, say u₂ (and this set of basis vectors is orthonormal), then: ⟨u₂|φ⟩ = a₁⟨u₂|u₁⟩ + a₂⟨u₂|u₂⟩ + a₃⟨u₂|u₃⟩ = a₂.

- Operators: the operator O, as in O|B⟩ = |C⟩, meaning O operating on ket |B⟩ produces the ket |C⟩. Of course, operators (mappings) are defined in Hilbert spaces too; here they operate on kets. Indeed, linear mappings, or linear operators, can be associated with matrices. This is no different from what you probably know from vector calculus, or linear algebra. Here is an example: suppose we have the mapping O and ket |B⟩. Then in many cases the mapping actually performs a matrix multiplication, meaning that the column vector (ket) |B⟩ is mapped to the column vector |C⟩. Or, simply said, the operator O maps the ket |B⟩ to the ket |C⟩.

Proposition 2: |B⟩⟨A| usually corresponds to a matrix, or linear operator. Multiplying a column vector by a row vector (the outer product) is a common operation in linear algebra, and its outcome is a matrix. So, proposition 2 seems plausible, since it follows that |B⟩⟨A| is a matrix.

Proposition 3: |φ⟩⟨φ| corresponds to what is called the density matrix of a pure state.

In proposition 2, we have seen that |B⟩⟨A| usually produces a matrix. Now, if we take a ket |φ⟩ and multiply it with its dual vector, the bra ⟨φ|, as in |φ⟩⟨φ|, then of course it is to be expected that we get a matrix again. However, the elements of that matrix are a bit special here, since they tell us something about the probability to find that pure state in one of its basis states. In a given basis, the diagonal elements of that matrix will always represent the probabilities that the state will be found in one of the corresponding basis states. In its most simple form, where we for example have |φ⟩ = (1/√2)(|u₁⟩ + |u₂⟩), the density matrix is the 2×2 matrix with all four elements equal to ½; its diagonal elements, ½ and ½, are the probabilities to find |u₁⟩ or |u₂⟩.
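A quick numerical check of propositions 1-3 may help (Python/NumPy; the basis and the coefficients are example values of my own):

```python
import numpy as np

u1, u2, u3 = np.eye(3)            # an orthonormal basis of R^3

# |phi> = a1|u1> + a2|u2>, with a1 = a2 = 1/sqrt(2)
phi = (u1 + u2) / np.sqrt(2)
print(np.dot(u2, phi))            # <u2|phi> = a2 = 0.707...  (proposition 1)

outer = np.outer(phi, phi)        # |phi><phi|: indeed a matrix (proposition 2)
print(outer.shape)                # (3, 3)

rho = outer                       # the density matrix of this pure state (prop. 3)
print(np.diag(rho))               # [0.5, 0.5, 0.0]: probabilities for u1, u2, u3
```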
The density matrix becomes more important, as a description, when talking about mixed states.

Proposition 4: ⟨φ|O|φ⟩ corresponds to the expectation value of the observable O.

We can make that plausible in the following way. We have associated a certain observable (such as momentum, position, etc.) with a linear operator O. Now suppose for a moment that we have diagonalized the operator, so that only the diagonal elements of its matrix are nonzero, and these represent the eigenvalues λᵢ. Take |φ⟩ = Σᵢ aᵢ|uᵢ⟩, where the uᵢ are basis vectors (in our simplification, we can write |φ⟩ as a column vector of the coefficients aᵢ). Since O|uᵢ⟩ = λᵢ|uᵢ⟩ for a diagonal O, we get

⟨φ|O|φ⟩ = Σᵢ |aᵢ|² λᵢ

which can be read as the weighted average of the eigenvalues, with weights |aᵢ|²: exactly the probabilities of the corresponding outcomes. Thus we say that it's the expectation value of O. I hope you can see some logic in this. Proposition 4 is, however, valid in the general case too.

Proposition 5: the trace of an operator is Tr(O) = Σᵢ ⟨uᵢ|O|uᵢ⟩.

The trace of an operator, or matrix, is the sum of the diagonal elements. (With respect to pure and mixed states, a related trace has a different outcome: for a density matrix ρ, Tr(ρ²) = 1 for a pure state, and Tr(ρ²) < 1 for a mixed state; real numbers only.) In ℝ³, we can take the orthonormal basis vectors u₁ = (1, 0, 0), u₂ = (0, 1, 0), u₃ = (0, 0, 1); they correspond to |u₁⟩, |u₂⟩, |u₃⟩ in our usual ket notation. If we consider the right side of the expression Σᵢ ⟨uᵢ|O|uᵢ⟩, then we have O|uᵢ⟩: O operating on a basis vector uᵢ. Suppose i = 1, meaning our first basis vector. Let the matrix of O have rows (a, b, c), (d, e, f), (g, h, i), and let it operate on the basis vector (1, 0, 0). This yields the vector (a, d, g): the first column of the matrix O. Let's call that vector A (†). For the other two basis vectors, the same principle applies.

Next, let's see what happens if we perform the left side of Σᵢ ⟨uᵢ|O|uᵢ⟩. We already found that the vector A corresponds to O|u₁⟩. Using the left side, we have ⟨u₁|A⟩: the inner product of (1, 0, 0) with (a, d, g), which is a. Note that this number a is the top-left element of the matrix O. Since Tr(O) = Σᵢ ⟨uᵢ|O|uᵢ⟩, we repeat a similar calculation using all basis vectors, and add up all the results. Hopefully you see that this is then the sum of the diagonal elements. I already showed it for the first diagonal element (a), using the first basis vector; the two remaining basis vectors similarly produce e and i. In this simple example, we then have Tr(O) = a + e + i.

Note that, in general, O|uᵢ⟩ produces the i-th column of O (see (†) above). In the exceptional case where O|uᵢ⟩ produces λᵢ uᵢ, thus a scalar coefficient λᵢ times a basis vector, we would have a matrix in which only the diagonal elements are nonzero, and all others (the off-diagonal elements) are null. In such a case, it is often said that those diagonal elements λᵢ are the eigenvalues of the operator O.

It's absolutely formulated in "Jip and Janneke" language (a Dutch phrase for very plain wording), but I hope you get the picture. Let's also try a slightly more formal derivation of the expectation value (proposition 4) and the trace (proposition 5): with |φ⟩ = Σᵢ aᵢ|uᵢ⟩ in an orthonormal basis, ⟨φ|O|φ⟩ = Σᵢ Σⱼ aᵢ* aⱼ ⟨uᵢ|O|uⱼ⟩; if O is diagonal in this basis, then O|uⱼ⟩ = λⱼ|uⱼ⟩, only the terms with i = j survive, and ⟨φ|O|φ⟩ = Σᵢ |aᵢ|² λᵢ. Likewise, ⟨uᵢ|O|uᵢ⟩ = Oᵢᵢ, so Tr(O) = Σᵢ Oᵢᵢ, the sum of the diagonal elements.
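And the same kind of numerical check for propositions 4 and 5 (again with an example operator and state of my own choosing):

```python
import numpy as np

u1, u2, u3 = np.eye(3)
O = np.diag([1.0, 2.0, 3.0])      # a diagonalized observable, eigenvalues 1, 2, 3

a = np.array([np.sqrt(0.5), np.sqrt(0.5), 0.0])   # |phi>, normalized
expval = a @ O @ a                # <phi|O|phi>  (proposition 4)
print(expval)                     # 1.5 = 0.5*1 + 0.5*2: the weighted average

trace = sum(u @ O @ u for u in (u1, u2, u3))      # sum_i <u_i|O|u_i>  (prop. 5)
print(trace, np.trace(O))         # 6.0 6.0: the sum of the diagonal elements
```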
3. Meaning of pure states and mixed states.

3.1 A few words on pure states:

While you might think that only a completely defined state such as |0⟩ is pure, the term holds in general for our well-known superpositions. You may also view a pure state as a single state vector, as opposed to a mixed state. So, even at this stage, we may already suspect what a mixed state is. A pure state is actually pretty simple; we have seen them before. A mixed state is a statistical mixture of pure states, while superposition refers to a state carrying some other states simultaneously. Although it can be confusing, the term superposition is sort of reserved for pure states.

So, our well-known qubit is a pure state too: |φ⟩ = a|0⟩ + b|1⟩. Or, as a more general equation, we can write:

|φ⟩ = Σᵢ aᵢ |uᵢ⟩ (equation 8)

This is a shorthand notation, where i runs from 1 to N, or the upper bound might even be infinite. Usually, such a single state vector |φ⟩ is thus represented by a vector, or ket, notation, and is identified with a certain unknown observable of a single entity, such as a single particle. So, a pure state is like a vector (called a ket), and this vector can be associated with the state of one particle. A pure state is a superposition of eigenstates, as shown in equation 8.

Other notes on pure states: such vectors are also normalized; that is, for the coefficients a₁, a₂, etc., it holds that |a₁|² + |a₂|² + ... = 1. It's also often said that a pure state can deliver all there is to know about the quantum system, because the system's evolution in time can be calculated, and operators on pure states work as projection operators. In the sections above, we have also seen that |aᵢ|² can be associated with the probability of finding the state in the |uᵢ⟩ eigenstate (or basis vector) after a measurement has been performed. In general, an often-used interpretation of |φ⟩ is that it is in a superposition of the basis states simultaneously; the keyword here is simultaneously. However, this interpretation depends on your view of QM, since many interpretations of QM exist. But superposition will always hold, and it is a key feature of a pure state (as in equation 8). A pure state is still very important, since a single quantum system can be prepared in such a state.
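As a small sanity check of the normalization and of the role of the |aᵢ|², here is a sketch with coefficients chosen by me:

```python
import numpy as np

# |phi> = a1|u1> + a2|u2> + a3|u3>, with example coefficients.
a = np.array([0.5, 0.5j, np.sqrt(0.5)])
print(np.sum(np.abs(a)**2))        # 1.0: the state is normalized

# Born rule: |a_i|^2 is the probability to find |u_i> after a measurement.
rng = np.random.default_rng(0)
samples = rng.choice(3, size=100_000, p=np.abs(a)**2)
print(np.bincount(samples, minlength=3) / 100_000)   # ~[0.25, 0.25, 0.5]
```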
3.2 A few words on mixed states:

A mixed state is a mix of pure states. Or, formulated a little better: a probability distribution over pure states is a mixed state. It's an entity that you cannot really describe using a regular ket state vector; you must use a density matrix to represent a mixed state. Another good description might be that it is a statistical ensemble of pure states. So we can think of a mixed state as a collection of pure states |φᵢ⟩, each with an associated probability pᵢ, where 0 ≤ pᵢ ≤ 1 and Σᵢ pᵢ = 1. In fact, mixed states are more commonly encountered in experiments. For example, when particles are emitted from some source, they might differ in state.

In such a case, for one such particle you can write down the state vector (the ket). But for a statistical mix of two or more particles, you cannot. The particles are not really connected, and they might individually differ in their (pure) states. What one might do is create a statistical mix, which actually boils down to devising the density matrix. The statistical mix is an ensemble of copies of similar systems, or even an ensemble with respect to time of similar quantum systems. So, you can only write down the density matrix of such an ensemble. In equation 3, we have seen a product state of two kets. That's not a statistical mix, as we have here with a mixed state. In a certain sense, a mixed state looks like a classical statistical description of two (or more) pure states. When particles are sent out by some source, say at some interval, or even sort of continuously, it's even possible to write down the equation (density matrix) of two such particles which were emitted at different times. This should illustrate that the component pure states do not belong to the same wave function, or ket description.

You might see a bra-ket-like equation for a mixed state, but then it must have terms like |φ⟩⟨φ|, which indicate that we are dealing with a density matrix. In general, the density matrix (or state operator) of a (totally) mixed state has a format like:

ρ = Σᵢ pᵢ |φᵢ⟩⟨φᵢ|

Hopefully, you can see something that looks like a statistical mixture here. Here is an example that describes some mix of two pure states |a⟩ and |b⟩:

ρ = ¼ |a⟩⟨a| + ¾ |b⟩⟨b| (equation 9)

Note that this is not an equation like that of a pure state. Of course, some ket equations can be rather complex, so not all terms per se need to be in the form |φ⟩⟨φ|; especially intermediate results can be quite confusing. Also: by no means is this text complete. That's obvious of course. For example, partially mixed systems exist too, adding to the difficulty of recognizing states. A certain class of states are the so-called pseudo-pure families of states; this refers to states formed by mixing any pure state with the totally mixed state. So, please do not view the discussion above as a comprehensive description of pure and mixed states, which it certainly is not.

3.3 What about our entangled two-particle system:

Equation 4, which described an entangled bipartite system, is repeated here again:

|Ψ⟩ = (1/√2)(|01⟩ + |10⟩)

Note that this is a normal ket equation, and it is also a superposition. We do not see the characteristic |·⟩⟨·| terms which we would expect to see in a mixed state. Therefore, it's a pure state.

There are several peculiar things with such entangled states. We have already seen some in section 1.2, where Alice and Bob performed measurements on the member particles in their own separate Labs. Another peculiar thing is this: I will not illustrate it further here, but using some mathematical techniques, it's possible to trace out the state of one particle from a two-particle system.

- For example, if you have a normal product state like equation 3, then tracing out a particle, like particle 2, just gives the right equation for particle 1. This was probably to be expected, since the product state is separable.
- If you do the same for an entangled system, then when you trace out a particle, you end up with a mixed state, even though the original state is pure. That is really quite remarkable. More on this later.

The sketch below illustrates both points; after that, let's go to the next chapter.
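Here is the promised sketch (NumPy; the states are chosen by me). It checks the Tr(ρ²) test for pure vs mixed, and then traces out one particle of the entangled pair of equation 4:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# A pure superposition (1/sqrt2)(|0> + |1>) versus the mixed state of
# equation 9, here taking |a> = |0> and |b> = |1>.
v = (ket0 + ket1) / np.sqrt(2)
pure = np.outer(v, v)
mixed = 0.25 * np.outer(ket0, ket0) + 0.75 * np.outer(ket1, ket1)
print(np.trace(pure @ pure))       # 1.0   -> pure:  Tr(rho^2) = 1
print(np.trace(mixed @ mixed))     # 0.625 -> mixed: Tr(rho^2) < 1

# Tracing out particle 2 of the entangled state (1/sqrt2)(|01> + |10>):
psi = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)
m = psi.reshape(2, 2)              # m[i, j]: amplitude of |i>_1 |j>_2
rho1 = m @ m.conj().T              # reduced density matrix of particle 1
print(rho1)                        # [[0.5, 0.], [0., 0.5]]
print(np.trace(rho1 @ rho1))       # 0.5 -> mixed, although the total state was pure
```

So the member particle of a pure entangled pair really does look like a (maximally) mixed state on its own, exactly as claimed above.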
4. The inequalities of Bell, or Bell's theorem.

4.1 The original formulation.

The famous Bell inequalities (1964) would, in principle, make it possible to test whether a local realistic theory, like the Local Hidden Variables (LHV) theory, could produce the same results as QM. Or, stated somewhat differently: no theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. Or, stated differently again: there is no underlying classical interpretation of quantum mechanics. Regarding the latter statement, I would like to make a small (really small) reservation, since, say from 2008 (or so), newer parallel-universe theories have been developed. Although many don't buy them, the mathematical frameworks and ideas are impressive. In chapter 5, I really like to touch upon a few of them.

The Bell theorem was revised at a later moment by John Clauser, Michael Horne, Abner Shimony and R. A. Holt, whose surnames were used in labeling this revision the CHSH inequality. The CHSH inequality can be viewed as a generalization of the Bell inequalities.

Probability, and hidden variables:

To a high degree, QM boils down to calculating probabilities of certain outcomes of events. Most physicists say that QM is intrinsically probabilistic. This weirdness is even enhanced by remarkable experiments, like the one described in section 1.2. It is true that the effects described in section 1.2 are in conflict with local realism, unless factors play a role of which we are still fully unaware, like hidden variables. We may say that Einstein's view of a more complete specification of reality, related to QM, rests on our ignorance of local pre-existing, but unknown, variables. Once these unknown hidden variables are known, the pieces fall together, and the strange probabilistic behaviour can be explained. This then includes an explanation of the strange case described in section 1.2 (also called the EPR paradox). This is why a possible test between local realism and the essential ideas of QM is of enormous importance. It seems that Bell indeed formulated a theoretical basis for such a test, based on stochastic principles. I have to say that almost all physicists agree on Bell's formulation, and real experiments have been executed, all in favour of QM, and against (local) hidden variables theories.

What is the essence of the Bell inequalities? In his original paper (Physics Vol. 1, No. 3, pp. 195-200, 1964), Bell starts with a short and accurate description of the problem, and how he wants to approach it. It's really a great intro, declaring exactly what he is planning to do. I advise you to read sections I and II of his original paper (or read it completely, of course); it is easy to find online. Bell's theorem, or more accurately the CHSH inequality, has been put to the test, and much theoretical work has been done as well, for example on n-particle systems and other more complex forms of entanglement. On the Internet you can find many (relatively) easy explanations of Bell's theorem. However, the original paper has the additional charm that it explicitly uses local variables, like λ, which stands for one or more (possibly a continuous range of) variables. His mathematics then explicitly uses λ in all derivations, and ultimately it leads to his inequalities. A numerical illustration of the violation follows below.
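The sketch below (my own, with the standard optimal measurement angles) computes the CHSH combination for the singlet state directly from QM:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def A(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# The singlet state |Psi-> = (1/sqrt2)(|01> - |10>)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(ta, tb):
    """Correlation <Psi| A(ta) (x) A(tb) |Psi>."""
    return np.real(psi.conj() @ np.kron(A(ta), A(tb)) @ psi)

a, ap = 0.0, np.pi / 2              # Alice's two settings
b, bp = np.pi / 4, 3 * np.pi / 4    # Bob's two settings

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))   # 2.828... = 2*sqrt(2): above the classical CHSH bound of 2
```

Any LHV model must keep |S| ≤ 2; QM delivers 2√2 ≈ 2.83 at these angles, and this is what the experiments confirm.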
If we consider our experimental setup of section 1.2 again, where Alice and Bob (both in remote Labs) perform measurements on their member particles, then one important assumption of local realism is that the result for particle 2 does not depend on any settings (e.g. of the measurement device) in the Lab of particle 1, or the other way around. In both Labs, the measurement should be a local process. Any statistical illusion would then be due to the distribution of λ in the respective Labs, as prescribed by a Local Hidden Variables theory. The Bell inequalities provide a means to statistically test LHV against pure QM. In effect, experimental tests which violate the Bell inequalities are supportive of QM non-locality. So far, this is indeed what the tests have delivered.

Some folks see the discussion in the light of two broad beliefs: either you believe that signalling is not limited by c, or you believe in superdeterminism. Superdeterminism then refers to the situation where any evolution of any entity or process is fully determined, so to speak as of the birth of the Universe, from where particles and fields snowed out of the false vacuum. Interestingly, all particles and other stuff indeed have a sort of common origin, and thus may have given rise to a super-entanglement of all stuff in the Universe. Still-unknown variables have then sort of fixed everything, and a sort of superdeterminism follows. Personally, I don't buy it. And it seems too narrow too. There are also some newer theories (chapter 5) which do not directly support superdeterminism.

4.2 Newer insights on the Bell inequalities and LHVs.

- Simultaneous measurements vs non-simultaneous measurements:

Since the second half of the 90s (or so), newer insights have emerged on Bell's theorem, or at least some questions are asked, and additional remarks are made. One such thought is on how to integrate the Heisenberg relations into the theorem and the test results. In a good example of such an article, the authors state that near-simultaneous measurement implicitly relies on the Heisenberg uncertainty relations. This is indeed true, since if Alice measures the spin along the z-direction and finds up, then we may say that if Bob also measures his member particle along the z-direction, he will certainly find down. Therefore, the full experiment will (also) use axes for Alice and Bob which do not align, but have a variety of different angles. Then, afterwards, all records are collected, correlations are established, and using Bell's inequalities we try to see if those inequalities are violated (in which case LHV gets a blow, and QM seems to win). The point of the authors is, however, that the measurements will occur at the same time. If a time element is introduced in the derivation of Bell's theorem, a weakening of the upper bound of the theorem is found. As the main cause of this, the authors point out that second-order de Broglie-Bohm-type wavefunctions may work as local operators in the Labs of Alice and Bob. I personally can't really find mistakes, apart from the fact that de Broglie-Bohm is actually another interpretation of QM, which might not have a place in the argument. However, I am not sure at all. By the way, the de Broglie-Bohm pilot-wave interpretation is a very serious interpretation of QM, with many supporting physicists.
However, the main point is that the traditional Bell inequalities (or the CHSH inequality), in combination with the experimental setup, are not unchallenged (which is how good physics should indeed operate).

- Werner states:

Amazingly, as was discovered by Werner, there exist certain entangled states that likely will not violate any Bell inequality, since such states allow a local hidden variable (LHV) model. His treatment (1989) is a theoretical argument, where he first considers the act of preparing states which are not correlated, thus not entangled, like the example in equation 3, which is a separable product state. Next, he considers two preparing devices which have a certain random generator, which makes it possible to generate states where the joint expectation value is no longer separable, or factorizable. His article is from 1989; at that time it was held that systems which are not classically correlated must be EPR correlated. Using a certain mathematical argumentation, he makes it quite plausible that there exists a semi-entangled state, or Werner state, which has the look and feel of entanglement, and where an LHV can operate. He admits it's indeed a model, but it has triggered several authors to explore this idea in a more general setting. The significance is, of course, to have non-separable systems which still admit an LHV. If you are interested, take a look at his original paper. A small numerical sketch follows below.
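The sketch makes Werner's point concrete under assumptions of my own choosing: the standard two-qubit Werner family, the Peres-Horodecki (partial transpose) entanglement test, and the known CHSH value p·2√2 at the optimal angles. It exhibits states that are entangled yet do not violate CHSH:

```python
import numpy as np

# Werner state: rho_W = p |Psi-><Psi-| + (1 - p) I/4  (two qubits).
# Known thresholds (stated, not derived here): entangled for p > 1/3,
# but CHSH is only violated for p > 1/sqrt(2) ~ 0.707.
psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def werner(p):
    return p * np.outer(psi_minus, psi_minus.conj()) + (1 - p) * np.eye(4) / 4

def entangled_by_ppt(rho):
    """Peres-Horodecki: a two-qubit state is entangled iff its
    partial transpose has a negative eigenvalue."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.min(np.linalg.eigvalsh(pt)) < -1e-12

for p in (0.2, 0.5, 0.9):
    chsh = p * 2 * np.sqrt(2)        # CHSH value at the optimal angles
    print(p, entangled_by_ppt(werner(p)), chsh > 2)
# 0.2: not entangled, no violation
# 0.5: entangled, yet NO CHSH violation
# 0.9: entangled and violates CHSH
```

At p = 0.5 the state is entangled, but the CHSH value stays below 2: exactly the gap Werner pointed at.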
- Countless other pros and contras:

There are many articles (somewhat) pro or contra Bell's theorem, and many different arguments are used in the battle. You can find them easily, for example if you Google the terms criticism Bells theorem arxiv, where arXiv will produce free scientific preprints. One of them makes a strong point against LHV, and is very much pro QM: it uses a model of two entangled particles without a common origin, and such a system is very problematic for any type of classical or LHV-related theory. I am not suggesting that you should read such articles completely. On the contrary, often only the introduction is good enough, since there the authors outline their intentions and arguments.

So, what do we have up to now? What we have seen in section 1.2 (the EPR entangled bi-particle experiment) and 1.4 (Quantum Teleportation) is that something that behaves like an immediate action at a distance seems to be at work. This does not suggest that any form of signalling/communication exists that surpasses the speed of light; as said in section 1.2, the no-communication theorem states exactly that. However, not all folks would agree on this. By the way, the QT effect we saw in section 1.4 simply also needed a classical channel in order to transport the state of particle 1 to particle 3 at Bob's place. That also supports the view that true information transfer does not go faster than c.

There exist a number of interpretations of QM, like e.g. the de Broglie-Bohm pilot-wave interpretation. Rather recently, newer parallel-universe models were also proposed, with a radically different view on QM. Regarding the latter: you might find that strange, but some models are pretty strong. The most commonly used interpretation is the one that naturally uses superpositions of states. That model works, and is used all over the world. For example, most articles have no problem at all in writing a state (ket) as a superposition of basis states, as in a pure state, as we have seen in section 3.1.

In fact, once QM is described in the framework of Hilbert spaces (which are vector spaces), superposition is sort of imposed, or unavoidable. But of course, the very first descriptions, using wave functions to describe particles and quantum systems in general, are very much the same type of formulation. And this vector formulation fits the original postulates of QM quite well. But it seems quite fair to say that it is actually just this principle of superposition that has put us in this rather weird situation, where we still cannot fully and satisfactorily understand exactly why we see what we think we see, as described in section 1.2. Not all physicists like that non-locality stuff. For quite a few, a Hidden Variables theory (or a similar theory) is not dead at all. Although the experimental evidence from the Bell tests seems rather convincing, there still exist quite a few counterarguments. For now, we stay on the pure QM path (superpositions, EPR non-locality, probabilities, operators, projectors, etc.), and on how most people nowadays interpret quantum steering, entanglement, and Bell non-locality. Let's go to the next section.

5. Steering, Entanglement, and Bell non-locality.

5.1 Some descriptions:

Let's first try to describe steering.

Quantum steering: quantum steering is the ability of Alice to perform measurements on her local member of an entangled system, with different outcomes, leading to different states for the remote part of that entangled system (at Bob's Lab), independent of any distance between them.

How did I come up with such a nice description? It goes back to the man who used such wording for the first time (Schrödinger, 1935), in an article written as a response to Einstein's EPR paper. If I may quote a nice paragraph from that article (where he is discussing two remote members of an entangled system, or entanglement in general):

"It is rather discomforting that the theory should allow a system to be steered or piloted into one or the other type of state at the experimenter's mercy in spite of his having no access to it. This paper does not aim at a solution of the paradox, it rather adds to it, if possible. A hint as regards the presumed obstacle will be found at the end."

Schrödinger already considered (or suspected) the case (as described in section 1.2) that the result Alice measures instantaneously steers what Bob will find. Although in section 1.2 we saw steering at work, I also like to discuss a modern test involving steering, all under the operational definitions listed below. Many questions are left open at this point, among which are:

- Can Alice steer Bob?
- Can Bob steer Alice?
- Does two-way steering exist?
- What is the difference when pure systems and mixed systems are considered?
- Do all types of entangled systems enable steering?

We are not too far off from possible answers. Let's next try to describe entanglement and Bell non-locality.

Entanglement: when 2 or more particles can be described as a product state (like equation 3), they are separable. A measurement of an observable on one particle is independent of the other particles. You can always separate the original ket (of a certain particle) from the product state.
In many cases, however, two or more particles are fully intertwined (with respect to some observable), in such a way that you cannot separate one particle from the other(s). A measurement on one particle affects the other particle(s) too. A state as for example in equation 4 describes both particles (together in spacetime); they truly have a common (inseparable) state.

Bell non-locality: this seems to apply to any situation for which QM violates the Bell inequalities. So, it seems to be a very broad description. You might say that entangled states as in sections 1.2 and 1.4 fall under the non-locality description. How about steering? It would seem that this too, as a subset, is smaller than the notion of non-locality. But this is not correct.

The exact difference, or applicability, between steering, entanglement, and Bell non-locality was for a long time not a very sharp issue in the minds of physicists, so it seems. We have to admit that steering, entanglement, and Bell non-locality seemed to have much overlap in their meanings. Well, that proved to be not entirely true. Then, in 2006, an article by Wiseman, Jones, and Doherty appeared. They gave a pretty solid description of steering, entanglement, and non-locality, in the sense of when each term applies. As the authors say themselves, they provided (sort of) operational definitions. The statements above with respect to the relative place (as subsets or supersets) of steering, entanglement, and non-locality were not correct. As the article points out:

Proposition 1:

- We need entanglement to enable quantum steering.
- But not all entangled systems provide conditions for quantum steering.

The above sounds rather logical, since quantum steering, or EPR steering, is quite involved, and seems to be a rather demanding quality for a truly non-classical phenomenon. The authors formulate it this way: steerable states are a strict subset of the entangled states. So, regarded from the perspective of Venn diagrams, steerable states lie within the entangled states. Or, in other words: the existence of entanglement is necessary but not sufficient for steering. Thus: steering is deeper than just entanglement, although entanglement is required.

Proposition 2:

- Steering is a strict superset of the states that can exhibit Bell non-locality.

This would imply that steering can also happen in a Bell-local setting, which might be perceived as quite amazing. In other words: in a Bell-local setting (thus NOT non-local), steering is possible too. Or, and this is important: some steerable states do not violate the Bell inequalities. As we shall see a while later, if we consider only pure states, the original equivalence holds to a large extent; considering mixed states too leads to the propositions above. I recommend reading (at least) the first page of this article. True, all these sorts of scientific papers are rather spicy, but already on page one the authors are able to explain what they want to achieve.

5.2 Entanglement Sudden Death:

Maybe the following contributes to evaluating entanglement. Or maybe not. In any case, it's an effect that has been observed (as of 2006) in certain situations. Early-stage disentanglement, or ESD, is often called Entanglement Sudden Death in order to stress the rapid decay of entanglement of systems. It does not necessarily involve all types of entangled quantum systems.
Of course, any sort of state will interact with the environment in time, and decoherence has traditionally been viewed as a threat in, for example, Quantum Computing. ESD, however, involves the very rapid decay of entangled pairs of particles; that is, the entanglement itself seems to dissipate very fast, maybe due to classical and/or quantum noise. But the fast rate itself, which indeed has been measured for some systems, has surprised many physicists working in the quantum field. Of course, it is known that any system will at some time (one way or the other) interact with the environment; a general phenomenon such as decoherence is almost unavoidable. It's simply not possible to fully isolate a quantum system from the environment. This even holds for a system in vacuum. Even intrinsic quantum fluctuations have been suggested as a source for ESD. However, many see rather normal local noise, e.g. background radiation, as the source of the fast decay. Yu and Eberly have produced quite a few articles on the subject. The sudden loss of entanglement between subsystems may even be explained in terms of how the environment seems to select a preferred basis for the system, thus in effect aborting the entanglement. Just like decoherence, ESD might also play a role in a newer interpretation of the measurement process. Whether it is noise or something else, the reported quick rate is still not fully understood. A good overview (but not a very simple one) can be found in the literature.

To make it still more mysterious, an entanglement decay might be followed by an entanglement re-birth, observed in some experimental setups designed for studying ESD. A re-birth might happen in the case of applied random noise, or when both systems are considered to be embedded in a bath of noise or some other sort of thermal background. Many studies have been performed, including purely theoretical as well as experimental ones. More recent articles describe the behaviour of entanglement under random noise. As usual, I am not suggesting that you read such an article completely; this time, I invite you to go to the conclusions, just to get a taste of the remarkable results.

5.3 Types of entanglement:

Of course, this whole text is pretty lightweight, so if I can't find something, it does not mean a lot. But so far, as far as I am able to observe, there is no complete method to truly systematically group entangled states into clear categories. There probably exist two main perspectives here:

- The perspective of formal Quantum Information Theory, in which, more than just occasionally, the physics is abstracted away. This is not a black-and-white statement, of course.
- Pure physics, that is, theoretical and experimental research.

Both sciences deliver a wealth of knowledge, often overlap, and are often complementary in initiating ideas and concepts. So, what types of entanglement have physicists seen, or theoreticians conjectured?

1. Pure and mixed states can be entangled. For pure states, a general statement is that an entangled state is one that is not a product state. Rather equivalent is the statement: a state is called entangled if it is not separable. Mixed states can be entangled too. This is somewhat more complex, and in section 5.4 I will try a lightweight discussion.

2. The REE distance, or strength of entanglement. Relative Entropy of Entanglement (REE) is based on the distance of the state to the closest separable state.
It is not really a distance, but the relative entropy of entanglement, E_R, compared to the entropy of the nearest, or most similar, separable state. In Physics Letters A, December 1999, Matthew J. Donald and Michał Horodecki found that if two states are close to each other, then so are their entanglements per particle pair, if indeed they were entangled at all. Over the years after, the idea was refined more and more, leading to the notion of REE. So, it's an abstract measure of the strength of entanglement, and an area of active research. Intuitively, it's not too hard to imagine that for non-entangled states E_R = 0, while for strongly entangled states E_R approaches 1, so that in general one might say 0 ≤ E_R ≤ 1. You could find arguments that this is a way to classify entangled states.

3. Bi-particle and multi-particle entanglement. By itself, the distinction between an n = 2 particle system and an n > 2 system is a way to classify, or distinguish between, types of entanglement. Indeed, point 1 above does not fully apply to multi-particle entanglement. In an n > 2 system we can of course have fully separable states, and also fully entangled states. However, there also exists the notion of partially separable states. In ket notation, you might think of an equation like this: |Ψ⟩ = |φ⟩₁ ⊗ |ϕ⟩₂,₃, and suppose we cannot separate |ϕ⟩₂,₃ any further; then |Ψ⟩, which is only separated into the factors |φ⟩₁ and |ϕ⟩₂,₃, is a partially separable state.

4. Classification according to polytopes. When the number of particles (or entities) in a quantum system increases, the way entanglement might be organized gets very complex. While with n = 2 and n = 3 systems it's still quite manageable, with n > 3 the complexity of possible entangled states can get enormous (exponentially with n). In 2012, an article appeared in which the authors explicitly target multi-particle systems, which can expose a large number of different forms of entanglement. The authors showed that entanglement information of the system as a whole can be obtained from a single member particle. The key is the following: the quantum correlation of the whole system of N particles affects the single (or local) particle density matrices ρ(1) ... ρ(N), which are the reduced density matrices of the global quantum state. Thus, using information from one member alone delivers information about the entanglement of the global quantum state. From the reduced density matrices ρ(1) ... ρ(N) of the member particles, the eigenvalues λ can be obtained. Amazingly, using the relative sizes of these λ, a geometric polyhedron can be constructed which corresponds to an entanglement class. From these different geometric polyhedra (visually like trapeziums), at least stronger and weaker entanglement classes can be distinguished. Using a local member this way, you might say that this single member acts like a witness to the global quantum state. If you like more information, you might want to take a look at the original article by the authors Walter, Doran, Gross, and Christandl.
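To give at least one concrete "strength of entanglement" number: for pure bipartite states, the von Neumann entropy of the reduced density matrix is the standard measure (the entropy of entanglement). The sketch below is my own illustration of that idea; it is a cousin of REE, not REE itself (which requires an optimization over all separable states):

```python
import numpy as np

def reduced_rho_A(psi):
    """Partial trace over the second qubit of a 2-qubit pure state."""
    m = psi.reshape(2, 2)          # indices: (particle A, particle B)
    return m @ m.conj().T          # rho_A = Tr_B |psi><psi|

def entanglement_entropy(psi):
    """Von Neumann entropy (in bits) of the reduced state."""
    evals = np.linalg.eigvalsh(reduced_rho_A(psi))
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

product = np.kron([1.0, 0.0], [0.0, 1.0])                # |01>: separable
partial = np.array([0.0, np.sqrt(0.9), np.sqrt(0.1), 0.0])  # weakly entangled
bell = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)       # (|01> + |10>)/sqrt2

print(entanglement_entropy(product))   # 0.0   -> no entanglement
print(entanglement_entropy(partial))   # ~0.47 -> some entanglement
print(entanglement_entropy(bell))      # 1.0   -> maximal for two qubits
```

This also runs from 0 (separable) to 1 (maximally entangled, for two qubits), in the same spirit as the E_R range sketched above.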
5.4 Steering and entanglement: the pure and mixed cases:

6. A few words on the measurement problem.

This will be a very short section, but I hope to say something useful on this extensive subject. Certainly, because of chapter 7, the role of the observer and the measurement problem simply must be addressed. It's in fact a very difficult subject, and many physicists and philosophers have racked their brains over this stuff. It's not really about inaccuracies in instruments and devices. One of the core problems is the intrinsic probability in QM, and certain rules which have proven to be in effect, such as the Heisenberg uncertainty relations. And indeed, on top of this is the problem of the exact role of the observer. Do not underestimate the importance of that last statement. There exists a fairly large number of (established) interpretations of QM, and the role of the observer varies rather dramatically over some interpretations. Whatever one's vision on QM is, it's rather unlikely that it is possible to detach the observer completely from certain QM events and related observations, although undoubtedly some people do believe so. So, I think there are at least five or six points to consider:

- Intrinsic probability of QM.
- Uncertainty relations, and non-commuting observables.
- Role of the observer.
- Disruptive (strong) measurements vs weak measurements.
- The quantum description of the measurement process.
- Decoherence and pre-selection of states near/in the measuring device.

The problem is intrinsic to QM, and in many ways non-classical. Take for example the last point: if you are familiar with decoherence, then you know that a quantum system will always interact with the environment. In particular, at or near your measurement device, decoherence takes place, and a process like pre-selection of states may occur. In a somewhat exaggerated formulation: the specific environment of your lab may unravel your quantum system in a certain way, and dissipate certain other substates. Could that vary over different measurement devices, and with different environments? What does that say, in general, about experimental results? The subject is still somewhat controversial. There are nice articles on this; they are really quite large, but reading the first few pages already gives a good taste of the subject. You can also search for articles by Zurek and co-workers.

Role of the observer: and of course, there is the famous (or infamous) problem of the role of the observer. For example, are you and the measurement apparatus connected in some way? Maybe that sounds somewhat hazy, but some folks even study the psychological and physiological state of the observer with respect to measurements. I don't dare to say anything on such studies, but you should not dismiss them. A comprehensive analysis of making an observation, and certain choices, is very complex, and is probably not fully understood. But there exist more factual and mathematical considerations too. For example, what is generally understood by the Heisenberg uncertainty relations?

A few words on the Heisenberg uncertainty relations:

The strange case of section 1.2, and the role of measurements:

7. (Apparently) Strange new ideas.
